Open In Colab

LICENSING NOTICE¶

Note that all users of VitalDB, an open biosignal dataset, must agree to the Data Use Agreement below. If you do not agree, please close this window. The Data Use Agreement is available here: https://vitaldb.net/dataset/#h.vcpgs1yemdb5

This is the development version of the project code¶

For the Project Draft submission see the DL4H_Team_24_Project_Draft.ipynb notebook in the project repository.

Project repository¶

The project repository can be found at: https://github.com/abarrie2/cs598-dlh-project

Project video¶

The project video can be found at:

Introduction¶

This project aims to reproduce findings from the paper titled "Predicting intraoperative hypotension using deep learning with waveforms of arterial blood pressure, electroencephalogram, and electrocardiogram: Retrospective study" by Jo Y-Y et al. (2022) [1]. This study introduces a deep learning model that predicts intraoperative hypotension (IOH) events before they occur, utilizing a combination of arterial blood pressure (ABP), electroencephalogram (EEG), and electrocardiogram (ECG) signals.

Background of the Problem¶

Intraoperative hypotension (IOH) is a common and significant surgical complication defined by a mean arterial pressure drop below 65 mmHg. It is associated with increased risks of myocardial infarction, acute kidney injury, and heightened postoperative mortality. Effective prediction and timely intervention can substantially enhance patient outcomes.
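
The 65 mmHg definition above can be made concrete with a toy event detector (a sketch only, not the paper's labelling pipeline; the 1 Hz MAP input and one-minute duration rule are illustrative assumptions):

```python
import numpy as np

def detect_ioh(map_per_second, threshold=65.0, min_duration_s=60):
    """Flag an IOH event when MAP stays below `threshold` mmHg
    for at least `min_duration_s` consecutive seconds.

    map_per_second: 1 Hz sequence of mean arterial pressure values.
    """
    below = np.asarray(map_per_second) < threshold
    run = 0
    for is_below in below:
        run = run + 1 if is_below else 0
        if run >= min_duration_s:
            return True
    return False

# 90 s of normal pressure followed by 70 s below 65 mmHg -> an event
print(detect_ioh([80.0] * 90 + [60.0] * 70))  # True
```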

Evolution of IOH Prediction¶

Initial attempts to predict IOH primarily used arterial blood pressure (ABP) waveforms. A foundational study by Hatib F et al. (2018) titled "Machine-learning Algorithm to Predict Hypotension Based on High-fidelity Arterial Pressure Waveform Analysis" [2] showed that machine learning could forecast IOH events using ABP with reasonable accuracy. This finding spurred further research into utilizing various physiological signals for IOH prediction.

Subsequent advancements included the development of the Acumen™ hypotension prediction index, which was studied in "AcumenTM hypotension prediction index guidance for prevention and treatment of hypotension in noncardiac surgery: a prospective, single-arm, multicenter trial" by Bao X et al. (2024) [3]. This trial integrated a hypotension prediction index into blood pressure monitoring equipment, demonstrating its effectiveness in reducing the number and duration of IOH events during surgeries. Further study is needed to determine whether this reduction in IOH events translates into improved postoperative patient outcomes.

Current Study¶

Building on these advancements, the paper by Jo Y-Y et al. (2022) proposes a deep learning approach that enhances prediction accuracy by incorporating EEG and ECG signals along with ABP. This multi-modal method, evaluated over prediction windows of 3, 5, 10, and 15 minutes, aims to provide a comprehensive physiological profile that could predict IOH more accurately and earlier. Their results indicate that the combination of ABP and EEG significantly improves performance metrics such as AUROC and AUPRC, outperforming models that use fewer signals or different combinations.

Our project seeks to reproduce and verify Jo Y-Y et al.'s results to assess whether this integrated approach can indeed improve IOH prediction accuracy, thereby potentially enhancing surgical safety and patient outcomes.

Scope of Reproducibility¶

The original paper investigated the following hypotheses:

  1. Hypothesis 1: A model using ABP and ECG will outperform a model using ABP alone in predicting IOH.
  2. Hypothesis 2: A model using ABP and EEG will outperform a model using ABP alone in predicting IOH.
  3. Hypothesis 3: A model using ABP, EEG, and ECG will outperform a model using ABP alone in predicting IOH.

Results were compared using AUROC and AUPRC scores. Based on the results described in the original paper, we expect that Hypothesis 2 will be confirmed, and that Hypotheses 1 and 3 will not be confirmed.

In order to perform the corresponding experiments, we will implement a CNN-based model that can be configured to train and infer using the following four model variations:

  1. ABP data alone
  2. ABP and ECG data
  3. ABP and EEG data
  4. ABP, ECG, and EEG data
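
A minimal sketch of a CNN classifier whose input channel count is configured by the chosen signal combination (the layer sizes and names here are illustrative assumptions, not the authors' exact architecture):

```python
import torch
import torch.nn as nn

class WaveformCNN(nn.Module):
    """Toy 1D-CNN whose input channel count depends on the signal combination."""
    def __init__(self, n_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 1)  # logit for IOH event vs non-event

    def forward(self, x):  # x: (batch, n_channels, time)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

# Variation 3 (ABP + EEG) uses two input channels; 60 s at 100 Hz assumed here.
model = WaveformCNN(n_channels=2)
logits = model(torch.randn(4, 2, 6000))
print(logits.shape)  # torch.Size([4, 1])
```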

We will measure the performance of these configurations using the same AUROC and AUPRC metrics as the original paper. To test hypothesis 1, we will compare the AUROC and AUPRC measures of model variation 1 against model variation 2; for hypothesis 2, variation 1 against variation 3; and for hypothesis 3, variation 1 against variation 4. For each of these comparisons, we will run experiments where the time-to-IOH event prediction uses the following prediction windows:

  1. 3 minutes before event
  2. 5 minutes before event
  3. 10 minutes before event
  4. 15 minutes before event
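
The AUROC and AUPRC comparisons can be computed with scikit-learn; a minimal sketch on made-up labels and scores (average precision is used as the AUPRC estimate; the values are illustrative only):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

y_true  = np.array([0, 0, 1, 0, 1, 1])              # 1 = IOH event
y_score = np.array([0.2, 0.6, 0.4, 0.1, 0.8, 0.9])  # model probabilities

auroc = roc_auc_score(y_true, y_score)
auprc = average_precision_score(y_true, y_score)    # AP approximates AUPRC
print(f"AUROC: {auroc:.3f}, AUPRC: {auprc:.3f}")
```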

From the original paper: predictive power of ABP, ECG and ABP + ECG models at 3-, 5-, 10- and 15-minute prediction windows.

Modifications made for demo mode¶

In order to demonstrate the functioning of the code in a short run (i.e., under the 8-minute limit), the following options and modifications were used:

  1. MAX_CASES was set to 20. The full training set contains 3296 cases, but the smaller number allows each section of the pipeline to be demonstrated quickly.
  2. vitaldb_cache is prepopulated in Google Colab. The cache file is approx. 800MB, contains the raw and minified copies of the source dataset, and is downloaded from Google Drive. This is much faster than using the vitaldb API, but again covers only a fraction of the data. The full dataset can be downloaded with the API or prepopulated by following the instructions in the "Bulk Data Download" section below.
  3. max_epochs is set to 6. With the small dataset, training is fast and shows the decreasing training and validation losses. In the full model run, max_epochs will be set to 100. In both cases early stopping is enabled and will stop training if the validation loss stops decreasing for five consecutive epochs.
  4. Only the "ABP + EEG" combination will be run. In the final report, additional combinations will be run, as discussed later.
  5. Only the 3-minute prediction window will be run. In the final report, additional prediction windows (5, 10 and 15 minutes) will be run, as discussed later.
  6. No ablations are run in the demo. These will be completed for the final report.
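
The early-stopping rule mentioned in item 3 (stop once validation loss fails to improve for five consecutive epochs) can be sketched as follows (a simplified illustration; class and variable names are assumptions, not the project's actual training code):

```python
class EarlyStopping:
    """Stop training when validation loss fails to improve for `patience` epochs."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopping(patience=5)
losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.60, 0.63, 0.64, 0.65]
for epoch, loss in enumerate(losses):
    if stopper.step(loss):
        print(f"Stopped at epoch {epoch}")
        break
```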

Methodology¶

The methodology section is composed of the following subsections: Environment, Data and Model.

  • Environment: This section describes the setup of the environment, including the installation of necessary libraries and the configuration of the runtime environment.
  • Data: This section describes the dataset used in the study, including its collection and preprocessing.
    • Data Collection: This section describes the process of downloading the dataset from VitalDB and populating the local data cache.
    • Data Preprocessing: This section describes the preprocessing steps applied to the dataset, including data selection, data cleaning, and feature extraction.
  • Model: This section describes the deep learning model used in the study, including its implementation, training, and evaluation.
    • Model Implementation: This section describes the implementation of the deep learning model, including the architecture, loss function, and optimization algorithm.
    • Model Training: This section describes the training process, including the training loop, hyperparameters, and training strategy.
    • Model Evaluation: This section describes the evaluation process, including the metrics used, the evaluation strategy, and the results obtained.

Environment¶

Create environment¶

The environment setup differs based on whether you are running the code on a local machine or on Google Colab. The following sections provide instructions for setting up the environment in each case.

Local machine¶

Create conda environment for the project using the environment.yml file:

conda env create --prefix .envs/dlh-team24 -f environment.yml

Activate the environment with:

conda activate .envs/dlh-team24

This environment specifies Python 3.12.2.

Google Colab¶

The following code snippet installs the required packages and downloads the necessary files in a Google Colab environment:

In [1]:
# Google Colab environments have a `/content` directory. Use this as a proxy for running Colab-only code
COLAB_ENV = "google.colab" in str(get_ipython())
if COLAB_ENV:
    #install vitaldb
    %pip install vitaldb

    # Executing in Colab therefore download cached preprocessed data.
    # TODO: Integrate this with the setup local cache data section below.
    # Check for file existence before overwriting.
    import gdown
    gdown.download(id="15b5Nfhgj3McSO2GmkVUKkhSSxQXX14hJ", output="vitaldb_cache.tgz")
    !tar -zxf vitaldb_cache.tgz

    # Download sqi_filter.csv from github repo
    !wget https://raw.githubusercontent.com/abarrie2/cs598-dlh-project/main/sqi_filter.csv

All other required packages are already installed in the Google Colab environment. As of May 5, 2024, Google Colab uses Python 3.10.12.

Load environment¶

In [2]:
# Import packages
import os
import random
import sys
import uuid
import copy
from collections import defaultdict
from glob import glob

from timeit import default_timer as timer

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.signal import butter, lfilter, spectrogram
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, roc_auc_score, precision_recall_curve, auc, confusion_matrix
from sklearn.metrics import RocCurveDisplay, PrecisionRecallDisplay, average_precision_score
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
import torch
from torch.utils.data import Dataset
import vitaldb
import h5py

import torch.nn as nn
import torch.nn.functional as F
from tqdm import tqdm
from datetime import datetime

Start a timer to measure notebook runtime:

In [3]:
global_time_start = timer()

Set random seeds to generate consistent results:

In [4]:
RANDOM_SEED = 42

def reset_random_state():
    random.seed(RANDOM_SEED)
    np.random.seed(RANDOM_SEED)
    torch.manual_seed(RANDOM_SEED)
    if torch.cuda.is_available():
        torch.cuda.manual_seed(RANDOM_SEED)
        torch.cuda.manual_seed_all(RANDOM_SEED)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    os.environ["PYTHONHASHSEED"] = str(RANDOM_SEED)
    
reset_random_state()

Set device to GPU or MPS if available

In [5]:
device = torch.device("cuda" if torch.cuda.is_available() else "mps" if (torch.backends.mps.is_available() and torch.backends.mps.is_built()) else "cpu")
print(f"Using device: {device}")
Using device: mps

Define class to print to console and simultaneously save to file:

In [6]:
class ForkedStdout:
    def __init__(self, file_path):
        self.file = open(file_path, 'w')
        self.stdout = sys.stdout

    def write(self, message):
        self.stdout.write(message)
        self.file.write(message)

    def flush(self):
        self.stdout.flush()
        self.file.flush()

    def __enter__(self):
        sys.stdout = self

    def __exit__(self, exc_type, exc_val, exc_tb):
        sys.stdout = self.stdout
        self.file.close()

Data¶

Data Description¶

Source¶

Data for this project is sourced from the open biosignal VitalDB dataset as described in "VitalDB, a high-fidelity multi-parameter vital signs database in surgical patients" by Lee H-C et al. (2022) [4], which contains perioperative vital signs and numerical data from 6,388 cases of non-cardiac (general, thoracic, urological, and gynecological) surgery patients who underwent routine or emergency surgery at Seoul National University Hospital between 2016 and 2017. The dataset includes ABP, ECG, and EEG signals, as well as other physiological data. The dataset is available through an API and Python library, and at PhysioNet: https://physionet.org/content/vitaldb/1.0.0/

Statistics¶

Characteristics of the dataset:

| Characteristic        | Value         | Details              |
|-----------------------|---------------|----------------------|
| Total number of cases | 6,388         |                      |
| Sex (male)            | 3,243 (50.8%) |                      |
| Age (years)           | 59            | Range: 48-68         |
| Height (cm)           | 162           | Range: 156-169       |
| Weight (kg)           | 61            | Range: 53-69         |
| Tram-Rac 4A tracks    | 6,355 (99.5%) | Sampling rate: 500Hz |
| BIS Vista tracks      | 5,566 (87.1%) | Sampling rate: 128Hz |
| Case duration (min)   | 189           | Range: 27-1041       |

Labels are only known after processing the data. In the original paper, there were an average of 1.6 IOH events per case and 5.7 non-events per case, so we expect approximately 10,221 IOH events and 36,412 non-events in the dataset.
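
These expected counts follow directly from the per-case averages (a quick arithmetic check):

```python
total_cases = 6388
events_per_case, non_events_per_case = 1.6, 5.7

print(round(total_cases * events_per_case))      # expected IOH events: 10221
print(round(total_cases * non_events_per_case))  # expected non-events: 36412
```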

Data Processing¶

Data will be processed as follows:

  1. Load the dataset from VitalDB, or from a local cache if previously downloaded.
  2. Apply the inclusion and exclusion selection criteria to filter the dataset according to surgery metadata.
  3. Generate a minified dataset by discarding all tracks except ABP, ECG, and EEG.
  4. Preprocess the data by applying band-pass and z-score normalization to the ECG and EEG signals, and filtering out ABP signals below a Signal Quality Index (SQI) threshold.
  5. Generate event and non-event samples by extracting 60-second segments around IOH events and non-events.
  6. Split the dataset into training, validation, and test sets with a 6:1:3 ratio, ensuring that samples from a single case are not split across different sets to avoid data leakage.
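
Step 6 can be sketched by splitting on case ids before any segments are gathered, which guarantees that no case contributes samples to more than one set (the helper below is illustrative, not the project's actual splitting code):

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_cases(case_ids, seed=42):
    """Split case ids 6:1:3 (train:val:test) so no case appears in two sets."""
    train_val, test = train_test_split(case_ids, test_size=0.3, random_state=seed)
    # 1/7 of the remaining 70% of cases gives the 10% validation share.
    train, val = train_test_split(train_val, test_size=1/7, random_state=seed)
    return train, val, test

case_ids = np.arange(100)
train, val, test = split_cases(case_ids)
print(len(train), len(val), len(test))  # 60 10 30
```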

Set Up Local Data Caches¶

VitalDB data is static, so local copies can be stored and reused to avoid expensive downloads and to speed up data processing.

The default directory defined below is in the project .gitignore file. If this is modified, the new directory should also be added to the project .gitignore.

In [7]:
VITALDB_CACHE = './vitaldb_cache'
VITAL_ALL = f"{VITALDB_CACHE}/vital_all"
VITAL_MINI = f"{VITALDB_CACHE}/vital_mini"
VITAL_METADATA = f"{VITALDB_CACHE}/metadata"
VITAL_MODELS = f"{VITALDB_CACHE}/models"
VITAL_RUNS = f"{VITALDB_CACHE}/runs"
VITAL_PREPROCESS_SCRATCH = f"{VITALDB_CACHE}/data_scratch"
VITAL_EXTRACTED_SEGMENTS = f"{VITALDB_CACHE}/segments"
In [8]:
TRACK_CACHE = None
SEGMENT_CACHE = None

# when USE_MEMORY_CACHING is enabled, track data will be persisted in an in-memory cache. Not useful once we have already pre-extracted all event segments
# DON'T USE: Stores items in memory that are later not used. Causes OOM on segment extraction.
USE_MEMORY_CACHING = False

# When RESET_CACHE is set to True, it will ensure the TRACK_CACHE is disposed and recreated when we do dataset initialization.
# Use as a shortcut to wiping cache rather than restarting kernel
RESET_CACHE = False

PREDICTION_WINDOW = 3
#PREDICTION_WINDOW = 'ALL'

ALL_PREDICTION_WINDOWS = [3, 5, 10, 15]

# Maximum number of cases of interest for which to download data.
# Set to a small value (ex: 20) for demo purposes, else set to None to disable and download and process all.
MAX_CASES = None
#MAX_CASES = 300

# Preloading Cases: when true, all matched cases will have the _mini tracks extracted and put into in-mem dict
PRELOADING_CASES = False
PRELOADING_SEGMENTS = True
# Perform Data Preprocessing: do we want to take the raw vital file and extract segments of interest for training?
PERFORM_DATA_PREPROCESSING = False
In [9]:
for cache_dir in (VITALDB_CACHE, VITAL_ALL, VITAL_MINI, VITAL_METADATA,
                  VITAL_MODELS, VITAL_RUNS, VITAL_PREPROCESS_SCRATCH,
                  VITAL_EXTRACTED_SEGMENTS):
    os.makedirs(cache_dir, exist_ok=True)

print(os.listdir(VITALDB_CACHE))
['models_', 'runs_03_parameter_tuning_pred_plots', 'models_old_0505', 'segments_filter_neg', 'segments_bak', 'runs_old', 'runs_03_15_parameter_tuning', 'segments_bak_0505', '.DS_Store', 'segments_filter_neg_pos', 'vital_mini_bak_0501', 'vital_all', 'segments_sizes_sp.txt', 'ABP_12_RESIDUAL_BLOCKS_64_BATCH_SIZE_1e-04_LEARNING_RATE_015_MINS__ALL_MAX_CASES_a8a3f484_0004.model', 'models_all_cases_baseline', 'segments_golden', 'models', 'docs', 'vital_mini.tar', 'models_03_parameter_tuning_pred_plots', 'data_scratch', 'segments_md5_sp.txt', 'vital_file_md5_mw.txt', 'segments_bak_0501', 'osfs', 'runs_03_15', 'vital_mini', 'segments_filter_none', 'vital_file_mini_md5_sp.txt', 'vital_file_mini_file_sizes_sp.txt', 'runs', 'metadata', 'runs_old_0505', 'segments', 'models_old', 'runs_03_segment_fixes', 'vital_file_md5_sp.txt', 'models_03_15_parameter_tuning']

Bulk Data Download¶

This step is not required, but will significantly speed up downstream processing and avoid a high volume of API requests to the VitalDB web site.

Note: The dataset is slightly different depending on whether it is downloaded from the API or from Physionet. In almost all cases, the relevant tracks are identical between the two, but this is not always true.

The cache population code checks whether the .vital files are available locally; the cache can be populated by calling the vitaldb API or by manually prepopulating it (recommended):

  • Manually download the dataset from the following site: https://physionet.org/content/vitaldb/1.0.0/
    • Download the zip file in a browser, or
    • Use wget -r -N -c -np https://physionet.org/files/vitaldb/1.0.0/ to download the files in a terminal
  • Move the contents of vital_files into the ${VITAL_ALL} directory.
In [10]:
# Returns the Pandas DataFrame for the specified dataset.
#   One of 'cases', 'labs', or 'trks'
# If the file exists locally, create and return the DataFrame.
# Else, download and cache the csv first, then return the DataFrame.
def vitaldb_dataframe_loader(dataset_name):
    if dataset_name not in ['cases', 'labs', 'trks']:
        raise ValueError(f'Invalid dataset name: {dataset_name}')
    file_path = f'{VITAL_METADATA}/{dataset_name}.csv'
    if os.path.isfile(file_path):
        print(f'{dataset_name}.csv exists locally.')
        df = pd.read_csv(file_path)
        return df
    else:
        print(f'downloading {dataset_name} and storing in the local cache for future reuse.')
        df = pd.read_csv(f'https://api.vitaldb.net/{dataset_name}')
        df.to_csv(file_path, index=False)
        return df

Exploratory Data Analysis¶

Cases¶

In [11]:
cases = vitaldb_dataframe_loader('cases')
cases = cases.set_index('caseid')
cases.shape
cases.csv exists locally.
Out[11]:
(6388, 73)
In [12]:
cases.index.nunique()
Out[12]:
6388
In [13]:
cases.head()
Out[13]:
subjectid casestart caseend anestart aneend opstart opend adm dis icu_days ... intraop_colloid intraop_ppf intraop_mdz intraop_ftn intraop_rocu intraop_vecu intraop_eph intraop_phe intraop_epi intraop_ca
caseid
1 5955 0 11542 -552 10848.0 1668 10368 -236220 627780 0 ... 0 120 0.0 100 70 0 10 0 0 0
2 2487 0 15741 -1039 14921.0 1721 14621 -221160 1506840 0 ... 0 150 0.0 0 100 0 20 0 0 0
3 2861 0 4394 -590 4210.0 1090 3010 -218640 40560 0 ... 0 0 0.0 0 50 0 0 0 0 0
4 1903 0 20990 -778 20222.0 2522 17822 -201120 576480 1 ... 0 80 0.0 100 100 0 50 0 0 0
5 4416 0 21531 -1009 22391.0 2591 20291 -67560 3734040 13 ... 0 0 0.0 0 160 0 10 900 0 2100

5 rows × 73 columns

In [14]:
cases['sex'].value_counts()
Out[14]:
sex
M    3243
F    3145
Name: count, dtype: int64

Tracks¶

In [15]:
trks = vitaldb_dataframe_loader('trks')
trks = trks.set_index('caseid')
trks.shape
trks.csv exists locally.
Out[15]:
(486449, 2)
In [16]:
trks.index.nunique()
Out[16]:
6388
In [17]:
trks.groupby('caseid')[['tid']].count().plot();
In [18]:
trks.groupby('caseid')[['tid']].count().hist();
In [19]:
trks.groupby('tname').count().sort_values(by='tid', ascending=False)
Out[19]:
tid
tname
Solar8000/HR 6387
Solar8000/PLETH_SPO2 6386
Solar8000/PLETH_HR 6386
Primus/CO2 6362
Primus/PAMB_MBAR 6361
... ...
Orchestra/AMD_VOL 1
Solar8000/ST_V5 1
Orchestra/NPS_VOL 1
Orchestra/AMD_RATE 1
Orchestra/VEC_VOL 1

196 rows × 1 columns

Parameters of Interest¶

Hemodynamic Parameters Reference¶

https://vitaldb.net/dataset/?query=overview#h.f7d712ycdpk2

SNUADC/ART

arterial blood pressure waveform

Parameter, Description, Type/Hz, Unit

SNUADC/ART, Arterial pressure wave, W/500, mmHg

In [20]:
trks[trks['tname'].str.contains('SNUADC/ART')].shape
Out[20]:
(3645, 2)

SNUADC/ECG_II

electrocardiogram waveform

Parameter, Description, Type/Hz, Unit

SNUADC/ECG_II, ECG lead II wave, W/500, mV

In [21]:
trks[trks['tname'].str.contains('SNUADC/ECG_II')].shape
Out[21]:
(6355, 2)

BIS/EEG1_WAV

electroencephalogram waveform

Parameter, Description, Type/Hz, Unit

BIS/EEG1_WAV, EEG wave from channel 1, W/128, uV

In [22]:
trks[trks['tname'].str.contains('BIS/EEG1_WAV')].shape
Out[22]:
(5871, 2)

Cases of Interest¶

These are the subset of case ids for which modelling and analysis will be performed based upon inclusion criteria and waveform data availability.

In [23]:
# TRACK NAMES is used for metadata analysis via API
TRACK_NAMES = ['SNUADC/ART', 'SNUADC/ECG_II', 'BIS/EEG1_WAV']
TRACK_SRATES = [500, 500, 128]
# EXTRACTION TRACK NAMES adds the EVENT track which is only used when doing actual file i/o
EXTRACTION_TRACK_NAMES = ['SNUADC/ART', 'SNUADC/ECG_II', 'BIS/EEG1_WAV', 'EVENT']
EXTRACTION_TRACK_SRATES = [500, 500, 128, 1]

As in the paper, select cases which meet the following criteria:

For patients, the inclusion criteria were as follows:

  1. adults (age >= 18)
  2. administered general anaesthesia
  3. undergone non-cardiac surgery.

For waveform data, the inclusion criteria were as follows:

  1. no missing monitoring for ABP, ECG, and EEG waveforms
  2. no cases containing false events or non-events due to poor signal quality (checked in second stage of data preprocessing)
In [24]:
# Adult
inclusion_1 = cases.loc[cases['age'] >= 18].index
print(f'{len(cases)-len(inclusion_1)} cases excluded, {len(inclusion_1)} remaining due to age criteria')

# General Anesthesia
inclusion_2 = cases.loc[cases['ane_type'] == 'General'].index
print(f'{len(cases)-len(inclusion_2)} cases excluded, {len(inclusion_2)} remaining due to anesthesia criteria')

# Non-cardiac surgery
inclusion_3 = cases.loc[
    ~cases['opname'].str.contains("cardiac", case=False)
    & ~cases['opname'].str.contains("aneurysmal", case=False)
].index
print(f'{len(cases)-len(inclusion_3)} cases excluded, {len(inclusion_3)} remaining due to non-cardiac surgery criteria')

# ABP, ECG, EEG waveforms
inclusion_4 = trks.loc[trks['tname'].isin(TRACK_NAMES)].index.value_counts()
inclusion_4 = inclusion_4[inclusion_4 == len(TRACK_NAMES)].index
print(f'{len(cases)-len(inclusion_4)} cases excluded, {len(inclusion_4)} remaining due to missing waveform data')

# SQI filter
# NOTE: this depends on a sqi_filter.csv generated by external processing
inclusion_5 = pd.read_csv('sqi_filter.csv', header=None, names=['caseid','sqi']).set_index('caseid').index
print(f'{len(cases)-len(inclusion_5)} cases excluded, {len(inclusion_5)} remaining due to SQI threshold not being met')

# Only include cases with known good waveforms.
exclusion_6 = pd.read_csv('malformed_tracks_filter.csv', header=None, names=['caseid']).set_index('caseid').index
inclusion_6 = cases.index.difference(exclusion_6)
print(f'{len(cases)-len(inclusion_6)} cases excluded, {len(inclusion_6)} remaining due to malformed waveforms')

cases_of_interest_idx = inclusion_1 \
    .intersection(inclusion_2) \
    .intersection(inclusion_3) \
    .intersection(inclusion_4) \
    .intersection(inclusion_5) \
    .intersection(inclusion_6)

cases_of_interest = cases.loc[cases_of_interest_idx]

print()
print(f'{cases_of_interest_idx.shape[0]} out of {cases.shape[0]} total cases remaining after exclusions applied')

# Trim cases of interest to MAX_CASES
if MAX_CASES:
    cases_of_interest_idx = cases_of_interest_idx[:MAX_CASES]
print(f'{cases_of_interest_idx.shape[0]} cases of interest selected')
57 cases excluded, 6331 remaining due to age criteria
345 cases excluded, 6043 remaining due to anesthesia criteria
14 cases excluded, 6374 remaining due to non-cardiac surgery criteria
3019 cases excluded, 3369 remaining due to missing waveform data
0 cases excluded, 6388 remaining due to SQI threshold not being met
533 cases excluded, 5855 remaining due to malformed waveforms

2763 out of 6388 total cases remaining after exclusions applied
2763 cases of interest selected
In [25]:
cases_of_interest.head(n=5)
Out[25]:
subjectid casestart caseend anestart aneend opstart opend adm dis icu_days ... intraop_colloid intraop_ppf intraop_mdz intraop_ftn intraop_rocu intraop_vecu intraop_eph intraop_phe intraop_epi intraop_ca
caseid
1 5955 0 11542 -552 10848.0 1668 10368 -236220 627780 0 ... 0 120 0.0 100 70 0 10 0 0 0
4 1903 0 20990 -778 20222.0 2522 17822 -201120 576480 1 ... 0 80 0.0 100 100 0 50 0 0 0
7 5124 0 15770 477 14817.0 3177 14577 -154320 623280 3 ... 0 0 0.0 0 120 0 0 0 0 0
10 2175 0 20992 -1743 21057.0 2457 19857 -220740 3580860 1 ... 0 90 0.0 0 110 0 20 500 0 600
12 491 0 31203 -220 31460.0 5360 30860 -208500 1519500 4 ... 200 100 0.0 100 70 0 20 0 0 3300

5 rows × 73 columns

Note: In the original paper, the authors used an SQI measure they called jSQI, but which appears to be jSQI + wSQI. We were not able to implement the same filter, so the included sqi_filter.csv stands in for it. Because we do not exclude cases where the SQI falls below the authors' threshold, our dataset is noisier than theirs, which will impact performance.
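
For intuition only, a crude ABP quality score might measure the fraction of samples inside a plausible physiological range. This is NOT the authors' jSQI/wSQI measure; the range bounds below are assumptions:

```python
import numpy as np

def crude_abp_sqi(abp, low=20.0, high=250.0):
    """Fraction of non-NaN ABP samples within a plausible mmHg range (NOT jSQI)."""
    abp = np.asarray(abp, dtype=float)
    valid = ~np.isnan(abp)
    if valid.sum() == 0:
        return 0.0
    in_range = (abp[valid] >= low) & (abp[valid] <= high)
    return float(in_range.mean())

good = 80 + 20 * np.sin(np.linspace(0, 20, 500))        # plausible waveform
noisy = np.where(np.arange(500) % 4 == 0, -10.0, good)  # 25% out-of-range artifacts
print(crude_abp_sqi(good), crude_abp_sqi(noisy))
```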

Tracks of Interest¶

These are the subset of tracks (waveforms) for the cases of interest identified above.

In [26]:
# A single case maps to one or more waveform tracks. Select only the tracks required for analysis.
trks_of_interest = trks.loc[cases_of_interest_idx][trks.loc[cases_of_interest_idx]['tname'].isin(TRACK_NAMES)]
trks_of_interest.shape
Out[26]:
(8289, 2)
In [27]:
trks_of_interest.head(n=5)
Out[27]:
tname tid
caseid
1 BIS/EEG1_WAV 0aa685df768489a18a5e9f53af0d83bf60890c73
1 SNUADC/ART 724cdd7184d7886b8f7de091c5b135bd01949959
1 SNUADC/ECG_II 8c9161aaae8cb578e2aa7b60f44234d98d2b3344
4 BIS/EEG1_WAV 1b4c2379be3397a79d3787dd810190150dc53f27
4 SNUADC/ART e28777c4706fe3a5e714bf2d91821d22d782d802
In [28]:
trks_of_interest_idx = trks_of_interest.set_index('tid').index
trks_of_interest_idx.shape
Out[28]:
(8289,)

Build Tracks Cache for Local Processing¶

Track data are large and therefore expensive to download repeatedly. By default, the .vital file format stores all tracks for each case internally. Since only select tracks per case are required, each .vital file can be further reduced by discarding the unused tracks.

In [29]:
# Ensure the full vital file dataset is available for cases of interest.
count_downloaded = 0
count_present = 0

#for i, idx in enumerate(cases.index):
for idx in cases_of_interest_idx:
    full_path = f'{VITAL_ALL}/{idx:04d}.vital'
    if not os.path.isfile(full_path):
        print(f'Missing vital file: {full_path}')
        # Download and save the file.
        vf = vitaldb.VitalFile(idx)
        vf.to_vital(full_path)
        count_downloaded += 1
    else:
        count_present += 1

print()
print(f'Count of cases of interest:           {cases_of_interest_idx.shape[0]}')
print(f'Count of vital files downloaded:      {count_downloaded}')
print(f'Count of vital files already present: {count_present}')
Count of cases of interest:           2763
Count of vital files downloaded:      0
Count of vital files already present: 2763

Validate Mini Files¶

Validate the minified .vital files and check that all of the required data tracks are present. The VitalDB API does not raise an error when a track that does not exist is requested.

In [30]:
# Convert vital files to "mini" versions including only the subset of tracks defined in TRACK_NAMES above.
# Only perform conversion for the cases of interest.
# NOTE: If this cell is interrupted, it can be restarted and will continue where it left off.
count_minified = 0
count_present = 0
count_missing_tracks = 0
count_not_fixable = 0

# If set to true, local mini files are checked for all tracks even if the mini file is already present.
FORCE_VALIDATE = False

for idx in cases_of_interest_idx:
    full_path = f'{VITAL_ALL}/{idx:04d}.vital'
    mini_path = f'{VITAL_MINI}/{idx:04d}_mini.vital'

    if FORCE_VALIDATE or not os.path.isfile(mini_path):
        print(f'Creating mini vital file: {idx}')
        vf = vitaldb.VitalFile(full_path, EXTRACTION_TRACK_NAMES)
        
        if len(vf.get_track_names()) != 4:
            print(f'Missing track in vital file: {idx}, {set(EXTRACTION_TRACK_NAMES).difference(set(vf.get_track_names()))}')
            count_missing_tracks += 1
            
            # Attempt to download from VitalDB directly and see if missing tracks are present.
            vf = vitaldb.VitalFile(idx, EXTRACTION_TRACK_NAMES)
            
            if len(vf.get_track_names()) != 4:
                print(f'Unable to fix missing tracks: {idx}')
                count_not_fixable += 1
                continue
                
            # Check each required track for empty data.
            empty_track = False
            for name, srate in zip(EXTRACTION_TRACK_NAMES, EXTRACTION_TRACK_SRATES):
                if vf.get_track_samples(name, 1/srate).shape[0] == 0:
                    print(f'Empty track: {idx}, {name}')
                    count_not_fixable += 1
                    empty_track = True
                    break
            if empty_track:
                continue

        vf.to_vital(mini_path)
        count_minified += 1
    else:
        count_present += 1

print()
print(f'Count of cases of interest:           {cases_of_interest_idx.shape[0]}')
print(f'Count of vital files minified:        {count_minified}')
print(f'Count of vital files already present: {count_present}')
print(f'Count of vital files missing tracks:  {count_missing_tracks}')
print(f'Count of vital files not fixable:     {count_not_fixable}')
Count of cases of interest:           2763
Count of vital files minified:        0
Count of vital files already present: 2763
Count of vital files missing tracks:  0
Count of vital files not fixable:     0

Filtering¶

As in the original paper, preprocessing characteristics are different for each of the three signal categories:

  • ABP: no preprocessing, use as-is
  • ECG: apply a 1-40Hz bandpass filter, then perform Z-score normalization
  • EEG: apply a 0.5-50Hz bandpass filter

apply_bandpass_filter() implements the bandpass filter using scipy.signal

In [31]:
def apply_bandpass_filter(data, lowcut, highcut, fs, order=5):
    b, a = butter(order, [lowcut, highcut], fs=fs, btype='band')
    y = lfilter(b, a, np.nan_to_num(data))
    return y

apply_zscore_normalization() implements Z-score normalization using NumPy:

In [32]:
def apply_zscore_normalization(signal):
    mean = np.nanmean(signal)
    std = np.nanstd(signal)
    return (signal - mean) / std
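As a quick sanity check (not part of the original pipeline), both helpers can be exercised on a synthetic signal: a 10 Hz sine inside the ECG passband plus a 100 Hz sine outside it, sampled at 500 Hz. The two functions are repeated here so the sketch runs standalone, and an order-2 filter is used, matching the order applied in the actual preprocessing below.

```python
import numpy as np
from scipy.signal import butter, lfilter

def apply_bandpass_filter(data, lowcut, highcut, fs, order=5):
    b, a = butter(order, [lowcut, highcut], fs=fs, btype='band')
    return lfilter(b, a, np.nan_to_num(data))

def apply_zscore_normalization(signal):
    return (signal - np.nanmean(signal)) / np.nanstd(signal)

fs = 500                                # ABP/ECG sampling rate in this notebook
t = np.arange(0, 10, 1 / fs)
in_band = np.sin(2 * np.pi * 10 * t)    # 10 Hz: inside the 1-40 Hz passband
out_band = np.sin(2 * np.pi * 100 * t)  # 100 Hz: outside the passband

filtered = apply_bandpass_filter(in_band + out_band, 1, 40, fs, order=2)
normalized = apply_zscore_normalization(filtered)

# Compare spectral amplitude at 10 Hz vs 100 Hz after filtering:
spec = np.abs(np.fft.rfft(filtered)) / len(filtered)
freqs = np.fft.rfftfreq(len(filtered), 1 / fs)
amp_10 = spec[np.argmin(np.abs(freqs - 10))]
amp_100 = spec[np.argmin(np.abs(freqs - 100))]
print(amp_10 > 3 * amp_100)  # the out-of-band component is strongly attenuated
```

After normalization the signal should also have near-zero mean and unit standard deviation, which is easy to verify with np.mean and np.std.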

Filtering demonstration¶

Demonstrate effects of the filters with pre/post filtering waveforms on a sample case:

In [33]:
caseidx = 1
file_path = f"{VITAL_MINI}/{caseidx:04d}_mini.vital"
vf = vitaldb.VitalFile(file_path, TRACK_NAMES)

originalAbp = None
filteredAbp = None
originalEcg = None
filteredEcg = None
originalEeg = None
filteredEeg = None

ABP_TRACK_NAME = "SNUADC/ART"
ECG_TRACK_NAME = "SNUADC/ECG_II"
EEG_TRACK_NAME = "BIS/EEG1_WAV"

for i, (track_name, rate) in enumerate(zip(TRACK_NAMES, TRACK_SRATES)):
    # Get samples for this track
    track_samples = vf.get_track_samples(track_name, 1/rate)
    #track_samples, _ = vf.get_samples(track_name, 1/rate)
    print(f"Track {track_name} @ {rate}Hz shape {len(track_samples)}")

    if track_name == ABP_TRACK_NAME:
        # ABP waveforms are used without further pre-processing
        originalAbp = track_samples
        filteredAbp = track_samples
    elif track_name == ECG_TRACK_NAME:
        originalEcg = track_samples
        # ECG waveforms are band-pass filtered between 1 and 40 Hz, and Z-score normalized
        # first apply bandpass filter (order 2, matching the preprocessing used later)
        filteredEcg = apply_bandpass_filter(track_samples, 1, 40, rate, 2)
        # then do z-score normalization
        filteredEcg = apply_zscore_normalization(filteredEcg)
    elif track_name == EEG_TRACK_NAME:
        # EEG waveforms are band-pass filtered between 0.5 and 50 Hz
        originalEeg = track_samples
        filteredEeg = apply_bandpass_filter(track_samples, 0.5, 50, rate, 2)

def plotSignal(data, title):
    plt.figure(figsize=(20, 5))
    plt.plot(data)
    plt.title(title)
    plt.show()

plotSignal(originalAbp, "Original ABP")
plotSignal(filteredAbp, "Filtered ABP (identical, as ABP is used unfiltered)")
plotSignal(originalEcg, "Original ECG")
plotSignal(filteredEcg, "Filtered ECG")
plotSignal(originalEeg, "Original EEG")
plotSignal(filteredEeg, "Filtered EEG")
Track SNUADC/ART @ 500Hz shape 5771049
Track SNUADC/ECG_II @ 500Hz shape 5771049
Track BIS/EEG1_WAV @ 128Hz shape 1477389

Perform data preprocessing¶

This section performs the actual data preprocessing laid out earlier:

In [34]:
# Preprocess data tracks
ABP_TRACK_NAME = "SNUADC/ART"
ECG_TRACK_NAME = "SNUADC/ECG_II"
EEG_TRACK_NAME = "BIS/EEG1_WAV"
EVENT_TRACK_NAME = "EVENT"
MINI_FILE_FOLDER = VITAL_MINI
CACHE_FILE_FOLDER = VITAL_PREPROCESS_SCRATCH

if RESET_CACHE:
    TRACK_CACHE = None
    SEGMENT_CACHE = None

if TRACK_CACHE is None:
    TRACK_CACHE = {}
    SEGMENT_CACHE = {}

def get_track_data(case, print_when_file_loaded = False):
    parsedFile = None
    abp = None
    eeg = None
    ecg = None
    events = None

    for i, (track_name, rate) in enumerate(zip(EXTRACTION_TRACK_NAMES, EXTRACTION_TRACK_SRATES)):
        # use integer case id and track name, delimited by pipe, as cache key
        cache_label = f"{case}|{track_name}"
        
        if cache_label not in TRACK_CACHE:
            if parsedFile is None:
                file_path = f"{MINI_FILE_FOLDER}/{case:04d}_mini.vital"
                if print_when_file_loaded:
                    print(f"[{datetime.now()}] Loading vital file {file_path}")
                parsedFile = vitaldb.VitalFile(file_path, EXTRACTION_TRACK_NAMES)
            
            dataset = np.array(parsedFile.get_track_samples(track_name, 1/rate))
            
            if track_name == ABP_TRACK_NAME:
                # no filtering for ABP
                abp = dataset
                abp = pd.DataFrame(abp).ffill(axis=0).bfill(axis=0)[0].values
                if USE_MEMORY_CACHING:
                    TRACK_CACHE[cache_label] = abp
            elif track_name == ECG_TRACK_NAME:
                ecg = dataset
                # apply ECG filtering: first bandpass then do z-score normalization
                ecg = pd.DataFrame(ecg).ffill(axis=0).bfill(axis=0)[0].values
                ecg = apply_bandpass_filter(ecg, 1, 40, rate, 2)
                ecg = apply_zscore_normalization(ecg)
                
                if USE_MEMORY_CACHING:
                    TRACK_CACHE[cache_label] = ecg
            elif track_name == EEG_TRACK_NAME:
                eeg = dataset
                eeg = pd.DataFrame(eeg).ffill(axis=0).bfill(axis=0)[0].values
                # apply EEG filtering: bandpass only
                eeg = apply_bandpass_filter(eeg, 0.5, 50, rate, 2)
                if USE_MEMORY_CACHING:
                    TRACK_CACHE[cache_label] = eeg
            elif track_name == EVENT_TRACK_NAME:
                events = dataset
                if USE_MEMORY_CACHING:
                    TRACK_CACHE[cache_label] = events
        else:
            # cache hit, pull from cache
            if track_name == ABP_TRACK_NAME:
                abp = TRACK_CACHE[cache_label]
            elif track_name == ECG_TRACK_NAME:
                ecg = TRACK_CACHE[cache_label]
            elif track_name == EEG_TRACK_NAME:
                eeg = TRACK_CACHE[cache_label]
            elif track_name == EVENT_TRACK_NAME:
                events = TRACK_CACHE[cache_label]

    return (abp, ecg, eeg, events)

# ABP waveforms are used without further pre-processing
# ECG waveforms are band-pass filtered between 1 and 40 Hz, and Z-score normalized
# EEG waveforms are band-pass filtered between 0.5 and 50 Hz
if PRELOADING_CASES:
    # determine disk cache file label
    maxlabel = "ALL"
    if MAX_CASES is not None:
        maxlabel = str(MAX_CASES)
    picklefile = f"{CACHE_FILE_FOLDER}/{PREDICTION_WINDOW}_minutes_MAX{maxlabel}.trackcache"

    for track in tqdm(cases_of_interest_idx):
        # getting track data will cause a cache-check and fill when missing
        # will also apply appropriate filtering per track
        get_track_data(track, False)
    
    print(f"Generated track cache, {len(TRACK_CACHE)} records generated")
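A small aside on the ffill/bfill pattern used in get_track_data() above (the array here is invented for illustration): forward-fill bridges interior gaps with the last valid sample, and the trailing backward-fill covers NaNs at the very start of a track.

```python
import numpy as np
import pandas as pd

# Illustrative waveform with leading and interior NaN gaps
raw = np.array([np.nan, np.nan, 80.0, np.nan, 82.0, np.nan])

# Same pattern as get_track_data(): ffill then bfill, then back to numpy
filled = pd.DataFrame(raw).ffill(axis=0).bfill(axis=0)[0].values
print(filled)  # -> [80. 80. 80. 80. 82. 82.]
```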

Processed data is stored in .h5 files. Define a loader to read this data and return a tuple with the waveform data:

In [35]:
def get_segment_data(file_path):
    abp = None
    eeg = None
    ecg = None

    if USE_MEMORY_CACHING:
        if file_path in SEGMENT_CACHE:
            (abp, ecg, eeg) = SEGMENT_CACHE[file_path]
            return (abp, ecg, eeg)

    try:
        with h5py.File(file_path, 'r') as f:
            abp = np.array(f['abp'])
            ecg = np.array(f['ecg'])
            eeg = np.array(f['eeg'])
        
        abp = np.array(abp)
        eeg = np.array(eeg)
        ecg = np.array(ecg)

        if len(abp) > 30000:
            abp = abp[:30000]
        elif len(abp) < 30000:
            abp = np.resize(abp, (30000))

        if len(ecg) > 30000:
            ecg = ecg[:30000]
        elif len(ecg) < 30000:
            ecg = np.resize(ecg, (30000))

        if len(eeg) > 7680:
            eeg = eeg[:7680]
        elif len(eeg) < 7680:
            eeg = np.resize(eeg, (7680))

        if USE_MEMORY_CACHING:
            SEGMENT_CACHE[file_path] = (abp, ecg, eeg)
    except Exception:
        # unreadable or malformed file: return None for all waveforms
        abp = None
        ecg = None
        eeg = None

    return (abp, ecg, eeg)
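One behavior of the loader above worth being aware of: segments are forced to fixed lengths (30,000 samples = 60 s at 500 Hz for ABP/ECG; 7,680 samples = 60 s at 128 Hz for EEG), and np.resize tiles a short segment cyclically rather than zero-padding it. A minimal illustration:

```python
import numpy as np

short = np.array([1.0, 2.0, 3.0])
print(np.resize(short, 7))  # -> [1. 2. 3. 1. 2. 3. 1.] (cyclic repetition)

# The same truncate-or-resize rule as the loader, at the ABP/ECG length:
long = np.arange(40000, dtype=float)
fixed = long[:30000] if len(long) > 30000 else np.resize(long, 30000)
print(fixed.shape)  # -> (30000,)
```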

The .vital files contain time-series data recorded before and after the surgery itself, and include an EVENT track where significant events are annotated. Define a function to read this track and extract the surgery start and end times so that data is extracted only from the surgical period:

In [36]:
def getSurgeryBoundariesInSeconds(event, debug=False):
    # indices of non-NaN entries (NaN != NaN, so event==event masks out NaNs)
    eventIndices = np.argwhere(event==event)
    # we are looking for the last annotation containing 'started'
    # and the first annotation containing 'finish'
    lastStart = 0
    firstFinish = len(event)-1
    
    # find last start
    for idx in eventIndices:
        if 'started' in event[idx[0]]:
            if debug:
                print(event[idx[0]])
                print(idx[0])
            lastStart = idx[0]
    
    # find first finish
    for idx in eventIndices:
        if 'finish' in event[idx[0]]:
            if debug:
                print(event[idx[0]])
                print(idx[0])

            firstFinish = idx[0]
            break
    
    if debug:
        print(f'lastStart, firstFinish: {lastStart}, {firstFinish}')
    return (lastStart, firstFinish)
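The function can be exercised on a hypothetical synthetic EVENT track (the annotation strings below are invented for illustration; real tracks contain free-text entries logged during surgery). A condensed copy of the function is included so the sketch runs standalone:

```python
import numpy as np

def getSurgeryBoundariesInSeconds(event):
    # condensed copy of the function defined above
    eventIndices = np.argwhere(event == event)  # non-NaN entries (NaN != NaN)
    lastStart, firstFinish = 0, len(event) - 1
    for idx in eventIndices:          # last annotation containing 'started'
        if 'started' in event[idx[0]]:
            lastStart = idx[0]
    for idx in eventIndices:          # first annotation containing 'finish'
        if 'finish' in event[idx[0]]:
            firstFinish = idx[0]
            break
    return (lastStart, firstFinish)

# Hypothetical EVENT track at 1 Hz: NaN except where annotations were logged
event = np.array([np.nan] * 100, dtype=object)
event[5] = 'Anesthesia started'
event[12] = 'Operation started'   # the *last* 'started' marks the surgery start
event[90] = 'Operation finished'  # the *first* 'finish' marks the surgery end

start, finish = getSurgeryBoundariesInSeconds(event)
print(int(start), int(finish))  # -> 12 90
```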

Define a function to check whether extracted segments already exist for a case. If not, they will need to be generated:

In [37]:
def areCaseSegmentsCached(caseid):
    seg_folder = f"{VITAL_EXTRACTED_SEGMENTS}/{caseid:04d}"
    return os.path.exists(seg_folder) and len(os.listdir(seg_folder)) > 0

Define a basic signal quality check function for ABP data:

In [38]:
def isAbpSegmentValidNumpy(samples, debug=False):
    valid = True
    if np.isnan(samples).mean() > 0.1:
        valid = False
        if debug:
            print(f">10% NaN")
    elif (samples > 200).any():
        valid = False
        if debug:
            print(f"Presence of BP > 200")
    elif (samples < 30).any():
        valid = False
        if debug:
            print(f"Presence of BP < 30")
    elif np.max(samples) - np.min(samples) < 30:
        if debug:
            print(f"Max - Min test < 30")
        valid = False
    elif (np.abs(np.diff(samples)) > 30).any():  # abrupt change -> noise
        if debug:
            print(f"Abrupt change (noise)")
        valid = False
    
    return valid
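The checks above can be exercised on synthetic waveforms (a condensed restatement of the function is included so the sketch runs standalone; the test signals are invented):

```python
import numpy as np

def isAbpSegmentValidNumpy(samples):
    # Fails if: >10% NaN, any BP > 200 or < 30 mmHg,
    # peak-to-peak range < 30 mmHg, or a sample-to-sample jump > 30 mmHg.
    if np.isnan(samples).mean() > 0.1:
        return False
    if (samples > 200).any() or (samples < 30).any():
        return False
    if np.max(samples) - np.min(samples) < 30:
        return False
    if (np.abs(np.diff(samples)) > 30).any():
        return False
    return True

fs = 500
t = np.arange(0, 60, 1 / fs)
# Plausible pulsatile ABP: ~90 mmHg mean with ~40 mmHg pulse pressure at 72 bpm
good = 90 + 20 * np.sin(2 * np.pi * 1.2 * t)
# A flatline (e.g. a disconnected transducer) has almost no pulse pressure
flat = np.full_like(t, 90.0)

print(isAbpSegmentValidNumpy(good), isAbpSegmentValidNumpy(flat))  # -> True False
```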

Check if the ABP data extracted for a case is valid:

In [39]:
def isAbpSegmentValid(vf, debug=False):
    ABP_ECG_SRATE_HZ = 500
    ABP_TRACK_NAME = "SNUADC/ART"

    samples = np.array(vf.get_track_samples(ABP_TRACK_NAME, 1/ABP_ECG_SRATE_HZ))
    return isAbpSegmentValidNumpy(samples, debug)

Save extracted segments to disk, using the .h5 format for compact storage and efficient reading:

In [40]:
def saveCaseSegments(caseid, positiveSegments, negativeSegments, compresslevel=9, debug=False, forceWrite=False):
    if len(positiveSegments) == 0 and len(negativeSegments) == 0:
        # exit early if no events found
        print(f'{caseid}: exit early, no segments to save')
        return

    # event composition
    # predictiveSegmentStart in seconds, predictiveSegmentEnd in seconds, predWindow (0 for negative), abp, ecg, eeg)
    # 0start, 1end, 2predwindow, 3abp, 4ecg, 5eeg

    seg_folder = f"{VITAL_EXTRACTED_SEGMENTS}/{caseid:04d}"
    if not os.path.exists(seg_folder):
        # if directory needs to be created, then there are no cached segments
        os.mkdir(seg_folder)
    else:
        if not forceWrite:
            # exit early if folder already exists, case already produced
            return

    # prior to writing files out, clear existing files
    for filename in os.listdir(seg_folder):
        file_path = os.path.join(seg_folder, filename)
        if debug:
            print(f'deleting: {file_path}')
        try:
            if os.path.isfile(file_path):
                os.unlink(file_path)
        except Exception as e:
            print('Failed to delete %s. Reason: %s' % (file_path, e))
    
    count_pos_saved = 0
    for i in range(0, len(positiveSegments)):
        event = positiveSegments[i]
        startIndex = event[0]
        endIndex = event[1]
        predWindow = event[2]
        abp = event[3]
        #ecg = event[4]
        #eeg = event[5]

        seg_filename = f"{caseid:04d}_{startIndex}_{predWindow:02d}_True.h5"
        seg_fullpath = f"{seg_folder}/{seg_filename}"
        if isAbpSegmentValidNumpy(abp, debug):
            count_pos_saved += 1

            abp = abp.tolist()
            ecg = event[4].tolist()
            eeg = event[5].tolist()
        
            f = h5py.File(seg_fullpath, "w")
            f.create_dataset('abp', data=abp, compression="gzip", compression_opts=compresslevel)
            f.create_dataset('ecg', data=ecg, compression="gzip", compression_opts=compresslevel)
            f.create_dataset('eeg', data=eeg, compression="gzip", compression_opts=compresslevel)
            
            f.flush()
            f.close()
            f = None

            abp = None
            ecg = None
            eeg = None

            # f.create_dataset('label', data=[1], compression="gzip", compression_opts=compresslevel)
            # f.create_dataset('pred_window', data=[event[2]], compression="gzip", compression_opts=compresslevel)
            # f.create_dataset('caseid', data=[caseid], compression="gzip", compression_opts=compresslevel)
        elif debug:
            print(f"{caseid:04d} {predWindow:02d}min {startIndex} starttime = ignored, segment validity issues")

    count_neg_saved = 0
    for i in range(0, len(negativeSegments)):
        event = negativeSegments[i]
        startIndex = event[0]
        endIndex = event[1]
        predWindow = event[2]
        abp = event[3]
        #ecg = event[4]
        #eeg = event[5]

        seg_filename = f"{caseid:04d}_{startIndex}_0_False.h5"
        seg_fullpath = f"{seg_folder}/{seg_filename}"
        if isAbpSegmentValidNumpy(abp, debug):
            count_neg_saved += 1

            abp = abp.tolist()
            ecg = event[4].tolist()
            eeg = event[5].tolist()
            
            f = h5py.File(seg_fullpath, "w")
            f.create_dataset('abp', data=abp, compression="gzip", compression_opts=compresslevel)
            f.create_dataset('ecg', data=ecg, compression="gzip", compression_opts=compresslevel)
            f.create_dataset('eeg', data=eeg, compression="gzip", compression_opts=compresslevel)
            
            f.flush()
            f.close()
            f = None

            abp = None
            ecg = None
            eeg = None

            # f.create_dataset('label', data=[0], compression="gzip", compression_opts=compresslevel)
            # f.create_dataset('pred_window', data=[0], compression="gzip", compression_opts=compresslevel)
            # f.create_dataset('caseid', data=[caseid], compression="gzip", compression_opts=compresslevel)
        elif debug:
            print(f"{caseid:04d} CleanWindow {startIndex} starttime = ignored, segment validity issues")
            
    if count_neg_saved == 0 and count_pos_saved == 0:
        print(f'{caseid}: nothing saved, all segments filtered')
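
For reference, a minimal round-trip showing the segment file layout written above: one gzip-compressed dataset each for abp, ecg (60 s at 500 Hz), and eeg (60 s at 128 Hz). The file path and random data here are illustrative only.

```python
import os
import tempfile

import h5py
import numpy as np

rng = np.random.default_rng(0)
abp = rng.uniform(60, 120, 30000)  # illustrative stand-ins, not real signals
ecg = rng.normal(0, 1, 30000)
eeg = rng.normal(0, 1, 7680)

path = os.path.join(tempfile.mkdtemp(), 'demo_segment.h5')
with h5py.File(path, 'w') as f:
    for name, data in (('abp', abp), ('ecg', ecg), ('eeg', eeg)):
        f.create_dataset(name, data=data, compression='gzip', compression_opts=9)

with h5py.File(path, 'r') as f:
    loaded = {name: np.array(f[name]) for name in ('abp', 'ecg', 'eeg')}

print(loaded['abp'].shape, loaded['eeg'].shape)  # -> (30000,) (7680,)
```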

The following method is adapted from the preprocessing block of reference [6] (https://github.com/vitaldb/examples/blob/master/hypotension_art.ipynb).

The approach first finds an intraoperative hypotensive event in the ABP waveform. It then backtracks to an earlier point in the waveform and extracts a 60-second segment representing the waveform features to use as model input. The figure below shows an example of this approach and is reproduced from the VitalDB example notebook referenced above.

Feature segment extraction

Generate hypotensive events

Hypotensive events are defined as a 1-minute interval with sustained mean ABP below 65 mmHg. Note: hypotensive events should be at least 20 minutes apart to minimize potential residual effects from previous events.

Generate hypotension non-events

To sample non-events, 30-minute segments where the ABP remained above 75 mmHg were selected, and three one-minute samples of each waveform were then taken from the middle of each segment. Both extractions occur in extract_segments.
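The event definition above can be sketched on a simplified 1 Hz mean-ABP trace (the real code below operates on the 500 Hz waveform; the trace and helper here are invented for illustration):

```python
import numpy as np

def first_ioh_event_start(map_1hz):
    # First second at which the 60 s window starting there has mean ABP < 65 mmHg
    for i in range(len(map_1hz) - 60):
        if np.nanmean(map_1hz[i:i + 60]) < 65:
            return i
    return None

map_1hz = np.full(4000, 85.0)  # stable around 85 mmHg
map_1hz[1200:1320] = 60.0      # two minutes of hypotension

print(first_ioh_event_start(map_1hz))  # -> 1189 (first window whose mean dips below 65)
```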

In [41]:
def extract_segments(
    cases_of_interest_idx,
    debug=False,
    checkCache=True,
    forceWrite=False,
    returnSegments=False,
    skipInvalidCleanEvents=False,
    skipInvalidIohEvents=False,
):
    # Sampling rate for ABP and ECG, Hz. These rates should be the same. Default = 500
    ABP_ECG_SRATE_HZ = 500

    # Sampling rate for EEG. Default = 128
    EEG_SRATE_HZ = 128

    # Final dataset for training and testing the model.
    positiveSegmentsMap = {}
    negativeSegmentsMap = {}
    iohEventsMap = {}
    cleanEventsMap = {}

    # Process each case and extract segments. For each segment identify presence of an event in the label zone.
    count_cases = len(cases_of_interest_idx)

    #for case_count, caseid in tqdm(enumerate(cases_of_interest_idx), total=count_cases):
    for case_count, caseid in enumerate(cases_of_interest_idx):
        if debug:
            print(f'Loading case: {caseid:04d}, ({case_count + 1} of {count_cases})')

        if checkCache and areCaseSegmentsCached(caseid):
            if debug:
                print(f'Skipping case: {caseid:04d}, already cached')
            # skip records we've already cached
            continue

        # read the arterial waveform
        (abp, ecg, eeg, event) = get_track_data(caseid)
        if debug:
            print(f'Length of {TRACK_NAMES[0]}:       {abp.shape[0]}')
            print(f'Length of {TRACK_NAMES[1]}:    {ecg.shape[0]}')
            print(f'Length of {TRACK_NAMES[2]}:     {eeg.shape[0]}')

        (startInSeconds, endInSeconds) = getSurgeryBoundariesInSeconds(event)
        if debug:
            print(f"Event markers indicate that surgery begins at {startInSeconds}s and ends at {endInSeconds}s.")

        #track_length_seconds = int(len(abp) / ABP_ECG_SRATE_HZ)
        track_length_seconds = endInSeconds
        
        if debug:
            print(f"Processing case {caseid} with length {track_length_seconds}s")

        
        # check if the ABP segment in the surgery window is valid
        if debug:
            isSurgerySegmentValid = \
                isAbpSegmentValidNumpy(abp[startInSeconds * ABP_ECG_SRATE_HZ:endInSeconds * ABP_ECG_SRATE_HZ])
            print(f'{caseid}: surgery segment valid: {isSurgerySegmentValid}')
        
        iohEvents = []
        cleanEvents = []
        i = 0
        started = False
        eofReached = False
        trackStartIndex = None

        # set i pointer (which operates in seconds) to start marker for surgery
        i = startInSeconds

        # FIRST PASS
        # in the first forward pass, we are going to identify the start/end boundaries of all IOH events within the case
        ioh_events_valid = []
        
        while i < track_length_seconds - 60 and i < endInSeconds:
            segmentStart = None
            segmentEnd = None
            segFound = False

            # look forward one minute
            abpSeg = abp[i * ABP_ECG_SRATE_HZ:(i + 60) * ABP_ECG_SRATE_HZ]

            # roll forward until we hit a one minute window where mean ABP >= 65 so we know leads are connected and it's tracking
            if not started:
                if np.nanmean(abpSeg) >= 65:
                    started = True
                    trackStartIndex = i
            # if we're started and mean abp for the window is <65, we are starting a new IOH event
            elif np.nanmean(abpSeg) < 65:
                segmentStart = i
                # now seek forward to find the end of the event, repeatedly checking the last minute of the IOH event
                for j in range(i + 60, track_length_seconds):
                    # look backward one minute
                    abpSegForward = abp[(j - 60) * ABP_ECG_SRATE_HZ:j * ABP_ECG_SRATE_HZ]
                    if np.nanmean(abpSegForward) >= 65:
                        segmentEnd = j - 1
                        break
                if segmentEnd is None:
                    eofReached = True
                else:
                    # otherwise, end of the IOH segment has been reached, record it
                    iohEvents.append((segmentStart, segmentEnd))
                    segFound = True
                    
                    if skipInvalidIohEvents:
                        isIohSegmentValid = isAbpSegmentValidNumpy(abpSeg)
                        ioh_events_valid.append(isIohSegmentValid)
                        if debug:
                            print(f'{caseid}: ioh segment valid: {isIohSegmentValid}, {segmentStart}, {segmentEnd}, {abpSeg.shape}')
                    else:
                        ioh_events_valid.append(True)

            i += 1
            if not started:
                continue
            elif eofReached:
                break
            elif segFound:
                i = segmentEnd + 1

        # SECOND PASS
        # in the second forward pass, we are going to identify the start/end boundaries of all non-overlapping 30 minute "clean" windows
        # reuse the 'start of signal' index from our first pass
        if trackStartIndex is None:
            trackStartIndex = startInSeconds
        i = trackStartIndex
        eofReached = False

        clean_events_valid = []
        
        while i < track_length_seconds - 1800 and i < endInSeconds:
            segmentStart = None
            segmentEnd = None
            segFound = False

            startIndex = i
            endIndex = i + 1800

            # check to see if this 30 minute window overlaps any IOH events, if so ffwd to end of latest overlapping IOH
            overlapFound = False
            latestEnd = None
            for event in iohEvents:
                # case 1: starts during an event
                if startIndex >= event[0] and startIndex < event[1]:
                    latestEnd = event[1]
                    overlapFound = True
                # case 2: ends during an event
                elif endIndex >= event[0] and endIndex < event[1]:
                    latestEnd = event[1]
                    overlapFound = True
                # case 3: event occurs entirely inside of the window
                elif startIndex < event[0] and endIndex > event[1]:
                    latestEnd = event[1]
                    overlapFound = True

            # FFWD if we found an overlap
            if overlapFound:
                i = latestEnd + 1
                continue

            # look forward 30 minutes
            abpSeg = abp[startIndex * ABP_ECG_SRATE_HZ:endIndex * ABP_ECG_SRATE_HZ]

            # if we're started and mean abp for the window is >= 75, we are starting a new clean event
            if np.nanmean(abpSeg) >= 75:
                overlapFound = False
                latestEnd = None
                for event in iohEvents:
                    # case 1: starts during an event
                    if startIndex >= event[0] and startIndex < event[1]:
                        latestEnd = event[1]
                        overlapFound = True
                    # case 2: ends during an event
                    elif endIndex >= event[0] and endIndex < event[1]:
                        latestEnd = event[1]
                        overlapFound = True
                    # case 3: event occurs entirely inside of the window
                    elif startIndex < event[0] and endIndex > event[1]:
                        latestEnd = event[1]
                        overlapFound = True

                if not overlapFound:
                    segFound = True
                    segmentEnd = endIndex
                    cleanEvents.append((startIndex, endIndex))
                    
                    if skipInvalidCleanEvents:
                        isCleanSegmentValid = isAbpSegmentValidNumpy(abpSeg)
                        clean_events_valid.append(isCleanSegmentValid)
                        if debug:
                            print(f'{caseid}: clean segment valid: {isCleanSegmentValid}, {startIndex}, {endIndex}, {abpSeg.shape}')
                    else:
                        clean_events_valid.append(True)

            i += 10
            if segFound:
                i = segmentEnd + 1

        if debug:
            print(f"IOH Events for case {caseid}: {iohEvents}")
            print(f"Clean Events for case {caseid}: {cleanEvents}")

        positiveSegments = []
        negativeSegments = []

        # THIRD PASS
        # in the third pass, we will use the collections of ioh event windows to generate our actual extracted segments based on our prediction window (positive labels)
        for i in range(0, len(iohEvents)):
            # Don't extract segments from invalid IOH event windows.
            if not ioh_events_valid[i]:
                continue

            if debug:
                print(f"Checking event {iohEvents[i]}")
            # we want to review current event boundaries, as well as previous event boundaries if available
            event = iohEvents[i]
            previousEvent = None
            if i > 0:
                previousEvent = iohEvents[i - 1]

            for predWindow in ALL_PREDICTION_WINDOWS:
                if debug:
                    print(f"Checking event {iohEvents[i]} for pred {predWindow}")
                iohEventStart = event[0]
                predictiveSegmentEnd = event[0] - (predWindow*60)
                predictiveSegmentStart = predictiveSegmentEnd - 60

                if (predictiveSegmentStart < 0):
                    # don't rewind before the beginning of the track
                    if debug:
                        print(f"Checking event {iohEvents[i]} for pred {predWindow} - exit, before beginning")
                    continue
                elif (predictiveSegmentStart < trackStartIndex):
                    # don't rewind before the beginning of signal in track
                    if debug:
                        print(f"Checking event {iohEvents[i]} for pred {predWindow} - exit, before track start")
                    continue
                elif previousEvent is not None:
                    # does this event window come before or during the previous event?
                    overlapFound = False
                    # case 1: starts during an event
                    if predictiveSegmentStart >= previousEvent[0] and predictiveSegmentStart < previousEvent[1]:
                        overlapFound = True
                    # case 2: ends during an event
                    elif iohEventStart >= previousEvent[0] and iohEventStart < previousEvent[1]:
                        overlapFound = True
                    # case 3: event occurs entirely inside of the window
                    elif predictiveSegmentStart < previousEvent[0] and iohEventStart > previousEvent[1]:
                        overlapFound = True
                    # do not extract a case if we overlap with another IOH event
                    if overlapFound:
                        if debug:
                            print(f"Checking event {iohEvents[i]} for pred {predWindow} - exit, overlap with earlier segment")
                        continue

                # track the positive segment
                positiveSegments.append((predictiveSegmentStart, predictiveSegmentEnd, predWindow,
                    abp[predictiveSegmentStart*ABP_ECG_SRATE_HZ:predictiveSegmentEnd*ABP_ECG_SRATE_HZ],
                    ecg[predictiveSegmentStart*ABP_ECG_SRATE_HZ:predictiveSegmentEnd*ABP_ECG_SRATE_HZ],
                    eeg[predictiveSegmentStart*EEG_SRATE_HZ:predictiveSegmentEnd*EEG_SRATE_HZ]))

        # FOURTH PASS
        # in the fourth and final pass, we will use the collections of clean event windows to generate our actual extracted segments based (negative labels)
        for i in range(0, len(cleanEvents)):
            # Don't extract segments from invalid clean event windows.
            if not clean_events_valid[i]:
                continue
            
            # everything will be 30 minutes long at least
            event = cleanEvents[i]
            # choose sample 1 @ 10 minutes
            # choose sample 2 @ 15 minutes
            # choose sample 3 @ 20 minutes
            timeAtTen = event[0] + 600
            timeAtFifteen = event[0] + 900
            timeAtTwenty = event[0] + 1200

            negativeSegments.append((timeAtTen, timeAtTen + 60, 0,
                                   abp[timeAtTen*ABP_ECG_SRATE_HZ:(timeAtTen + 60)*ABP_ECG_SRATE_HZ],
                                   ecg[timeAtTen*ABP_ECG_SRATE_HZ:(timeAtTen + 60)*ABP_ECG_SRATE_HZ],
                                   eeg[timeAtTen*EEG_SRATE_HZ:(timeAtTen + 60)*EEG_SRATE_HZ]))
            negativeSegments.append((timeAtFifteen, timeAtFifteen + 60, 0,
                                   abp[timeAtFifteen*ABP_ECG_SRATE_HZ:(timeAtFifteen + 60)*ABP_ECG_SRATE_HZ],
                                   ecg[timeAtFifteen*ABP_ECG_SRATE_HZ:(timeAtFifteen + 60)*ABP_ECG_SRATE_HZ],
                                   eeg[timeAtFifteen*EEG_SRATE_HZ:(timeAtFifteen + 60)*EEG_SRATE_HZ]))
            negativeSegments.append((timeAtTwenty, timeAtTwenty + 60, 0,
                                   abp[timeAtTwenty*ABP_ECG_SRATE_HZ:(timeAtTwenty + 60)*ABP_ECG_SRATE_HZ],
                                   ecg[timeAtTwenty*ABP_ECG_SRATE_HZ:(timeAtTwenty + 60)*ABP_ECG_SRATE_HZ],
                                   eeg[timeAtTwenty*EEG_SRATE_HZ:(timeAtTwenty + 60)*EEG_SRATE_HZ]))

        if returnSegments:
            positiveSegmentsMap[caseid] = positiveSegments
            negativeSegmentsMap[caseid] = negativeSegments
            iohEventsMap[caseid] = iohEvents
            cleanEventsMap[caseid] = cleanEvents
        
        saveCaseSegments(caseid, positiveSegments, negativeSegments, 9, debug=debug, forceWrite=forceWrite)

        #if debug:
        print(f'{caseid}: positiveSegments: {len(positiveSegments)}, negativeSegments: {len(negativeSegments)}')

    return positiveSegmentsMap, negativeSegmentsMap, iohEventsMap, cleanEventsMap

Case Extraction - Generate Segments Needed for Training¶

Ensure that all needed segments are in place for the cases that are being used. If data is already stored on disk this method returns immediately.

In [42]:
MANUAL_EXTRACT=True
SKIP_INVALID_CLEAN_EVENTS=True
SKIP_INVALID_IOH_EVENTS=True

if MANUAL_EXTRACT:
    mycoi = cases_of_interest_idx
    #mycoi = cases_of_interest_idx[:2800]
    #mycoi = [1]

    cnt = 0
    mod = 0
    for ci in mycoi:
        cnt += 1
        if mod % 100 == 0:
            print(f'count processed: {mod}, current case index: {ci}')
        try:
            p, n, i, c = extract_segments([ci], debug=False, checkCache=True, 
                                          forceWrite=True, returnSegments=False, 
                                          skipInvalidCleanEvents=SKIP_INVALID_CLEAN_EVENTS,
                                          skipInvalidIohEvents=SKIP_INVALID_IOH_EVENTS)
            p = None
            n = None
            i = None
            c = None
        except Exception as e:
            print(f'error on extract segment: {ci}: {e}')
        mod += 1
    print(f'extracted: {cnt}')
count processed: 0, current case index: 1
count processed: 100, current case index: 229
268: exit early, no segments to save
268: positiveSegments: 0, negativeSegments: 0
count processed: 200, current case index: 481
641: exit early, no segments to save
641: positiveSegments: 0, negativeSegments: 0
count processed: 300, current case index: 740
count processed: 400, current case index: 954
count processed: 500, current case index: 1160
count processed: 600, current case index: 1367
count processed: 700, current case index: 1595
1600: exit early, no segments to save
1600: positiveSegments: 0, negativeSegments: 0
count processed: 800, current case index: 1822
count processed: 900, current case index: 2055
2158: exit early, no segments to save
2158: positiveSegments: 0, negativeSegments: 0
2224: exit early, no segments to save
2224: positiveSegments: 0, negativeSegments: 0
count processed: 1000, current case index: 2317
2413: exit early, no segments to save
2413: positiveSegments: 0, negativeSegments: 0
count processed: 1100, current case index: 2533
count processed: 1200, current case index: 2775
count processed: 1300, current case index: 3014
3112: exit early, no segments to save
3112: positiveSegments: 0, negativeSegments: 0
count processed: 1400, current case index: 3218
count processed: 1500, current case index: 3442
3596: exit early, no segments to save
3596: positiveSegments: 0, negativeSegments: 0
3648: exit early, no segments to save
3648: positiveSegments: 0, negativeSegments: 0
count processed: 1600, current case index: 3682
3868: exit early, no segments to save
3868: positiveSegments: 0, negativeSegments: 0
count processed: 1700, current case index: 3879
count processed: 1800, current case index: 4109
count processed: 1900, current case index: 4347
4485: exit early, no segments to save
4485: positiveSegments: 0, negativeSegments: 0
count processed: 2000, current case index: 4603
count processed: 2100, current case index: 4830
count processed: 2200, current case index: 5072
count processed: 2300, current case index: 5314
count processed: 2400, current case index: 5568
5782: exit early, no segments to save
5782: positiveSegments: 0, negativeSegments: 0
count processed: 2500, current case index: 5793
5871: exit early, no segments to save
5871: positiveSegments: 0, negativeSegments: 0
count processed: 2600, current case index: 6017
count processed: 2700, current case index: 6248
6331: exit early, no segments to save
6331: positiveSegments: 0, negativeSegments: 0
extracted: 2763

Track and Segment Validity Checks¶

In [43]:
def printAbp(case_id_to_check, plot_invalid_only=False):
    vf_path = f'{VITAL_MINI}/{case_id_to_check:04d}_mini.vital'

    if not os.path.isfile(vf_path):
        return

    vf = vitaldb.VitalFile(vf_path)
    abp = vf.to_numpy(TRACK_NAMES[0], 1/500)

    print(f'Case {case_id_to_check}')
    print(f'ABP Shape: {abp.shape}')

    print(f'nanmin: {np.nanmin(abp)}')
    print(f'nanmean: {np.nanmean(abp)}')
    print(f'nanmax: {np.nanmax(abp)}')

    is_valid = isAbpSegmentValidNumpy(abp, debug=True)
    print(f'valid: {is_valid}')

    if plot_invalid_only and is_valid:
        return

    plt.figure(figsize=(20, 5))
    plt_color = 'C0' if is_valid else 'red'
    plt.plot(abp, plt_color)
    plt.title(f'ABP - Entire Track - Case {case_id_to_check} - {abp.shape[0] / 500} seconds')
    plt.axhline(y = 65, color = 'maroon', linestyle = '--')
    plt.show()
In [44]:
def printSegments(segmentsMap, case_id_to_check, print_label, normalize=False):
    for (x1, x2, r, abp, ecg, eeg) in segmentsMap[case_id_to_check]:
        print(f'{print_label}: Case {case_id_to_check}')
        print(f'lookback window: {r} min')
        print(f'start time: {x1}')
        print(f'end time: {x2}')
        print(f'length: {x2 - x1} sec')
        
        print(f'ABP Shape: {abp.shape}')
        print(f'ECG Shape: {ecg.shape}')
        print(f'EEG Shape: {eeg.shape}')

        print(f'nanmin: {np.nanmin(abp)}')
        print(f'nanmean: {np.nanmean(abp)}')
        print(f'nanmax: {np.nanmax(abp)}')
        
        is_valid = isAbpSegmentValidNumpy(abp, debug=True)
        print(f'valid: {is_valid}')

        # ABP normalization
        x_abp = np.copy(abp)
        if normalize:
            x_abp -= 65
            x_abp /= 65

        plt.figure(figsize=(20, 5))
        plt_color = 'C0' if is_valid else 'red'
        plt.plot(x_abp, plt_color)
        plt.title('ABP')
        plt.axhline(y = 65, color = 'maroon', linestyle = '--')
        plt.show()

        plt.figure(figsize=(20, 5))
        plt.plot(ecg, 'teal')
        plt.title('ECG')
        plt.show()

        plt.figure(figsize=(20, 5))
        plt.plot(eeg, 'indigo')
        plt.title('EEG')
        plt.show()

        print()
In [45]:
def printEvents(abp_raw, eventsMap, case_id_to_check, print_label, normalize=False):
    for (x1, x2) in eventsMap[case_id_to_check]:
        print(f'{print_label}: Case {case_id_to_check}')
        print(f'start time: {x1}')
        print(f'end time: {x2}')
        print(f'length: {x2 - x1} sec')

        abp = abp_raw[x1*500:x2*500]
        print(f'ABP Shape: {abp.shape}')

        print(f'nanmin: {np.nanmin(abp)}')
        print(f'nanmean: {np.nanmean(abp)}')
        print(f'nanmax: {np.nanmax(abp)}')
        
        is_valid = isAbpSegmentValidNumpy(abp, debug=True)
        print(f'valid: {is_valid}')

        # ABP normalization
        x_abp = np.copy(abp)
        if normalize:
            x_abp -= 65
            x_abp /= 65

        plt.figure(figsize=(20, 5))
        plt_color = 'C0' if is_valid else 'red'
        plt.plot(x_abp, plt_color)
        plt.title('ABP')
        plt.axhline(y = 65, color = 'maroon', linestyle = '--')
        plt.show()

        print()
In [46]:
def moving_average(x, seconds=60):
    w = seconds * 500
    return np.convolve(np.squeeze(x), np.ones(w), 'valid') / w
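As a quick sanity check on the helper above, here is a self-contained copy exercised on a flat signal (the 500 Hz sampling rate matches the rest of the notebook; the input values are illustrative):

```python
import numpy as np

# Same smoothing as the notebook helper: a boxcar average over
# `seconds` of a 500 Hz signal (window length = seconds * 500 samples).
def moving_average(x, seconds=60):
    w = seconds * 500
    return np.convolve(np.squeeze(x), np.ones(w), 'valid') / w

# A constant signal is unchanged; 'valid' mode trims w - 1 samples.
x = np.full((2000, 1), 80.0)          # 4 seconds of a flat 80 mmHg "ABP"
avg = moving_average(x, seconds=1)    # 500-sample window
print(avg.shape)                      # (1501,)
print(float(avg[0]))                  # 80.0
```

Note that 'valid' mode means the smoothed track is shorter than the input, which is why the overlay code below plots it against an offset x range.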
In [47]:
def printAbpOverlay(
    case_id_to_check,
    positiveSegmentsMap,
    negativeSegmentsMap,
    iohEventsMap,
    cleanEventsMap,
    movingAverage=False
):
    def overlay_segments(plt, segmentsMap, color, linestyle, positive=False):
        for (x1, x2, r, abp, ecg, eeg) in segmentsMap:
            sx1 = x1*500
            sx2 = x2*500
            mycolor = color
            if positive:
                if r == 3:
                    mycolor = 'red'
                elif r == 5:
                    mycolor = 'crimson'
                elif r == 10:
                    mycolor = 'tomato'
                else:
                    mycolor = 'salmon'
            plt.axvline(x = sx1, color = mycolor, linestyle = linestyle)
            plt.axvline(x = sx2, color = mycolor, linestyle = linestyle)
            plt.axvspan(sx1, sx2, facecolor = mycolor, alpha = 0.1)

    def overlay_events(plt, abp, eventsMap, opstart, opend, color, linestyle):
        for (x1, x2) in eventsMap:
            sx1 = x1*500
            sx2 = x2*500
            # only plot valid events
            if isAbpSegmentValidNumpy(abp[sx1:sx2]):
                # that are within the operating start and end times
                if sx1 >= opstart and sx2 <= opend:
                    plt.axvline(x = sx1, color = color, linestyle = linestyle)
                    plt.axvline(x = sx2, color = color, linestyle = linestyle)
                    plt.axvspan(sx1, sx2, facecolor = color, alpha = 0.1)

    vf_path = f'{VITAL_MINI}/{case_id_to_check:04d}_mini.vital'

    if not os.path.isfile(vf_path):
        return

    vf = vitaldb.VitalFile(vf_path)
    abp = vf.to_numpy(TRACK_NAMES[0], 1/500)

    print(f'Case {case_id_to_check}')
    print(f'ABP Shape: {abp.shape}')

    print(f'nanmin: {np.nanmin(abp)}')
    print(f'nanmean: {np.nanmean(abp)}')
    print(f'nanmax: {np.nanmax(abp)}')

    #is_valid = isAbpSegmentValidNumpy(abp, debug=True)
    #print(f'valid: {is_valid}')

    plt.figure(figsize=(24, 8))
    plt_color = 'C0' #if is_valid else 'red'
    plt.plot(abp, plt_color)
    plt.title(f'ABP - Entire Track - Case {case_id_to_check} - {abp.shape[0] / 500} seconds')
    plt.axhline(y = 65, color = 'maroon', linestyle = '--')

    # https://matplotlib.org/stable/gallery/lines_bars_and_markers/linestyles.html#linestyles
    
    opstart = cases.loc[case_id_to_check]['opstart'].item() * 500
    plt.axvline(x = opstart, color = 'black', linestyle = '--', linewidth=2)
    plt.text(opstart - 600000, -200, 'Operation Start', fontsize=15)
    
    opend = cases.loc[case_id_to_check]['opend'].item() * 500
    plt.axvline(x = opend, color = 'black', linestyle = '--', linewidth=2)
    plt.text(opend + 50000, -200, 'Operation End', fontsize=15)
    
    overlay_segments(plt, positiveSegmentsMap[case_id_to_check], 'crimson', (0, (1, 1)), positive=True)
    
    overlay_segments(plt, negativeSegmentsMap[case_id_to_check], 'teal', (0, (1, 1)))

    overlay_events(plt, abp, iohEventsMap[case_id_to_check], opstart, opend, 'brown', '-')
    
    overlay_events(plt, abp, cleanEventsMap[case_id_to_check], opstart, opend, 'teal', '-')
    
    abp_mov_avg = None
    if movingAverage:
        abp_mov_avg = moving_average(abp[opstart:(opend + 60*500)])
        myx = np.arange(opstart, opstart + len(abp_mov_avg), 1)
        plt.plot(myx, abp_mov_avg, 'red')

    plt.show()

Reality Check All Cases¶

In [48]:
# Global flag to control creating track and segment plots.
# These plots are expensive to create, but very interesting.
# Disable when training in bulk to speed up notebook processing.
PERFORM_TRACK_VALIDITY_CHECKS = True
In [49]:
# Check if all ABPs are well formed. Fast load and scan of the raw track data for ABP.
DISPLAY_REALITY_CHECK_ABP=True
DISPLAY_REALITY_CHECK_ABP_FIRST_ONLY=True

if PERFORM_TRACK_VALIDITY_CHECKS and DISPLAY_REALITY_CHECK_ABP:
    for case_id_to_check in cases_of_interest_idx:
        printAbp(case_id_to_check, plot_invalid_only=False)
        
        if DISPLAY_REALITY_CHECK_ABP_FIRST_ONLY:
            break
Case 1
ABP Shape: (5771049, 1)
nanmin: -495.6260070800781
nanmean: 78.15254211425781
nanmax: 374.3236389160156
Presence of BP > 200
valid: False

Validate Malformed Vital Files - Missing One Or More Tracks¶

Cases found to be missing one or more data tracks are recorded in malformed_tracks_filter.csv. They can be analyzed below:

In [50]:
# These are Vital Files removed because of malformed ABP waveforms.
DISPLAY_MALFORMED_ABP=True
DISPLAY_MALFORMED_ABP_FIRST_ONLY=True

if PERFORM_TRACK_VALIDITY_CHECKS and DISPLAY_MALFORMED_ABP:
    malformed_case_ids = pd.read_csv('malformed_tracks_filter.csv', header=None, names=['caseid']).set_index('caseid').index

    for case_id_to_check in malformed_case_ids:
        printAbp(case_id_to_check)
        
        if DISPLAY_MALFORMED_ABP_FIRST_ONLY:
            break

Validate Cases With No Segments Saved¶

Cases that yielded no extracted segments can be analyzed below to better understand why:

In [51]:
DISPLAY_NO_SEGMENTS_CASES=True
DISPLAY_NO_SEGMENTS_CASES_FIRST_ONLY=True

if PERFORM_TRACK_VALIDITY_CHECKS and DISPLAY_NO_SEGMENTS_CASES:
    no_segments_case_ids = [3413, 3476, 3533, 3992, 4328, 4648, 4703, 4733, 5130, 5501, 5693, 5908]

    for case_id_to_check in no_segments_case_ids:
        printAbp(case_id_to_check)
        
        if DISPLAY_NO_SEGMENTS_CASES_FIRST_ONLY:
            break
Case 3413
ABP Shape: (3430848, 1)
nanmin: -228.025146484375
nanmean: 48.44272232055664
nanmax: 293.3521423339844
>10% NaN
valid: False

Select Case For Segment Extraction Validation¶

Generate segment data for one or more cases. Perform a deep analysis of event and segment quality.

In [52]:
# NOTE: This is always set so that if this section of checks is skipped, the model prediction plots will match.
my_cases_of_interest_idx = [84, 198, 60, 16, 27]

# Note: By default, match extract segments processing block above.
# However, regenerate data real time to allow seeing impacts on segment extraction.
# This is why both checkCache and forceWrite are false by default.
positiveSegmentsMap, negativeSegmentsMap, iohEventsMap, cleanEventsMap = None, None, None, None

if PERFORM_TRACK_VALIDITY_CHECKS:
    positiveSegmentsMap, negativeSegmentsMap, iohEventsMap, cleanEventsMap = \
        extract_segments(my_cases_of_interest_idx, debug=False,
                         checkCache=False, forceWrite=False, returnSegments=True,
                         skipInvalidCleanEvents=SKIP_INVALID_CLEAN_EVENTS,
                         skipInvalidIohEvents=SKIP_INVALID_IOH_EVENTS)
84: positiveSegments: 4, negativeSegments: 15
198: positiveSegments: 4, negativeSegments: 12
60: positiveSegments: 4, negativeSegments: 3
16: positiveSegments: 8, negativeSegments: 6
27: positiveSegments: 8, negativeSegments: 12

Select a specific case on which to perform detailed low-level analysis.

In [53]:
case_id_to_check = my_cases_of_interest_idx[0]
print(case_id_to_check)
print()

if PERFORM_TRACK_VALIDITY_CHECKS:
    print((
        len(positiveSegmentsMap[case_id_to_check]),
        len(negativeSegmentsMap[case_id_to_check]),
        len(iohEventsMap[case_id_to_check]),
        len(cleanEventsMap[case_id_to_check])
    ))
84

(4, 15, 2, 7)
In [54]:
if PERFORM_TRACK_VALIDITY_CHECKS:
    printAbp(case_id_to_check)
Case 84
ABP Shape: (8856936, 1)
nanmin: -495.6260070800781
nanmean: 81.66030883789062
nanmax: 221.26779174804688
Presence of BP > 200
valid: False

Positive Events for Case - IOH Events¶

These IOH events define the ranges in front of which positive segments are extracted; positive samples are drawn from the lookback windows preceding each event.

In [55]:
tmp_abp = None

if PERFORM_TRACK_VALIDITY_CHECKS:
    tmp_vf_path = f'{VITAL_MINI}/{case_id_to_check:04d}_mini.vital'
    tmp_vf = vitaldb.VitalFile(tmp_vf_path)
    tmp_abp = tmp_vf.to_numpy(TRACK_NAMES[0], 1/500)
In [56]:
if PERFORM_TRACK_VALIDITY_CHECKS:
    printEvents(tmp_abp, iohEventsMap, case_id_to_check, 'IOH Event Segment', normalize=False)
IOH Event Segment: Case 84
start time: 10651
end time: 10903
length: 252 sec
ABP Shape: (126000, 1)
nanmin: 41.550628662109375
nanmean: 61.8976936340332
nanmax: 99.81057739257812
valid: True
IOH Event Segment: Case 84
start time: 10916
end time: 11030
length: 114 sec
ABP Shape: (57000, 1)
nanmin: -122.36724853515625
nanmean: 66.4285888671875
nanmax: 153.13327026367188
Presence of BP < 30
valid: False

Negative Events for Case - Non-IOH Events¶

These clean events define the ranges within which negative segments are extracted; negative samples are drawn from inside these regions.

In [57]:
if PERFORM_TRACK_VALIDITY_CHECKS:
    printEvents(tmp_abp, cleanEventsMap, case_id_to_check, 'Clean Event Segment', normalize=False)
Clean Event Segment: Case 84
start time: 2396
end time: 4196
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: 34.638397216796875
nanmean: 96.14398193359375
nanmax: 163.00784301757812
valid: True
Clean Event Segment: Case 84
start time: 4197
end time: 5997
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: 59.324859619140625
nanmean: 90.35917663574219
nanmax: 145.23361206054688
valid: True
Clean Event Segment: Case 84
start time: 5998
end time: 7798
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: 30.688568115234375
nanmean: 84.37336730957031
nanmax: 137.33395385742188
valid: True
Clean Event Segment: Case 84
start time: 7799
end time: 9599
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: 43.525543212890625
nanmean: 85.18022918701172
nanmax: 144.24612426757812
valid: True
Clean Event Segment: Case 84
start time: 11031
end time: 12831
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: -495.6260070800781
nanmean: 86.6280746459961
nanmax: 147.20852661132812
Presence of BP < 30
valid: False
Clean Event Segment: Case 84
start time: 12832
end time: 14632
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: -29.546295166015625
nanmean: 88.14582061767578
nanmax: 169.92001342773438
Presence of BP < 30
valid: False
Clean Event Segment: Case 84
start time: 14633
end time: 16433
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: 50.437713623046875
nanmean: 86.21431732177734
nanmax: 140.29629516601562
valid: True

Positive Segments for Case - IOH Events Predicted Using These¶

One-minute regions sampled and used to train the model on "positive" events.

In [58]:
if PERFORM_TRACK_VALIDITY_CHECKS:
    printSegments(positiveSegmentsMap, case_id_to_check, 'Positive Segment - IOH Event', normalize=False)
Positive Segment - IOH Event: Case 84
lookback window: 3 min
start time: 10411
end time: 10471
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 64.26211547851562
nanmean: 88.7354965209961
nanmax: 125.48446655273438
valid: True
Positive Segment - IOH Event: Case 84
lookback window: 5 min
start time: 10291
end time: 10351
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 61.299774169921875
nanmean: 86.20820617675781
nanmax: 122.52212524414062
valid: True
Positive Segment - IOH Event: Case 84
lookback window: 10 min
start time: 9991
end time: 10051
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 57.349945068359375
nanmean: 84.10186004638672
nanmax: 119.55972290039062
valid: True
Positive Segment - IOH Event: Case 84
lookback window: 15 min
start time: 9691
end time: 9751
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 62.287200927734375
nanmean: 87.69979095458984
nanmax: 126.47195434570312
valid: True

Negative Segments for Case - Non-IOH Events Predicted Using These¶

One-minute regions sampled and used to train the model on "negative" events.

In [59]:
if PERFORM_TRACK_VALIDITY_CHECKS:
    printSegments(negativeSegmentsMap, case_id_to_check, 'Negative Segment - Non-Event', normalize=False)
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 2996
end time: 3056
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 69.19943237304688
nanmean: 97.4190673828125
nanmax: 140.29629516601562
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 3296
end time: 3356
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 69.19943237304688
nanmean: 94.19501495361328
nanmax: 133.38412475585938
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 3596
end time: 3656
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 67.22451782226562
nanmean: 95.66307830810547
nanmax: 137.33395385742188
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 4797
end time: 4857
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 75.12417602539062
nanmean: 101.20699310302734
nanmax: 145.23361206054688
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 5097
end time: 5157
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 61.299774169921875
nanmean: 83.68433380126953
nanmax: 120.54721069335938
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 5397
end time: 5457
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 61.299774169921875
nanmean: 82.43463134765625
nanmax: 119.55972290039062
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 6598
end time: 6658
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 60.312286376953125
nanmean: 82.77767181396484
nanmax: 118.57229614257812
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 6898
end time: 6958
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 64.26211547851562
nanmean: 87.16991424560547
nanmax: 125.48446655273438
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 7198
end time: 7258
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 61.299774169921875
nanmean: 82.817138671875
nanmax: 119.55972290039062
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 8399
end time: 8459
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 61.299774169921875
nanmean: 86.23184204101562
nanmax: 121.53463745117188
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 8699
end time: 8759
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 53.400115966796875
nanmean: 70.72852325439453
nanmax: 100.79806518554688
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 8999
end time: 9059
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 57.349945068359375
nanmean: 76.20519256591797
nanmax: 106.72280883789062
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 15233
end time: 15293
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 57.349945068359375
nanmean: 82.36672973632812
nanmax: 120.54721069335938
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 15533
end time: 15593
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 58.337371826171875
nanmean: 91.39106750488281
nanmax: 135.35903930664062
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 15833
end time: 15893
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 53.400115966796875
nanmean: 81.62761688232422
nanmax: 122.52212524414062
valid: True

Overlay Plot of All Events and Segments Extracted¶

For each of the cases in my_cases_of_interest_idx, overlay the results of event and segment extraction.

In [60]:
DISPLAY_OVERLAY_CHECK_ABP=True
DISPLAY_OVERLAY_CHECK_ABP_FIRST_ONLY=True

if PERFORM_TRACK_VALIDITY_CHECKS and DISPLAY_OVERLAY_CHECK_ABP:
    for case_id_to_check in my_cases_of_interest_idx:
        printAbpOverlay(case_id_to_check, positiveSegmentsMap, 
                        negativeSegmentsMap, iohEventsMap, cleanEventsMap, movingAverage=False)
        
        if DISPLAY_OVERLAY_CHECK_ABP_FIRST_ONLY:
            break
Case 84
ABP Shape: (8856936, 1)
nanmin: -495.6260070800781
nanmean: 81.66030883789062
nanmax: 221.26779174804688
In [61]:
# Memory cleanup
del tmp_abp

Generate Train/Val/Test Splits¶

When case segments are stored to disk, the filename is intentionally constructed so that its metadata can be reconstructed later. The format is {case}_{startX}_{predWindow}_{label}.h5, where {case} is the case ID, {startX} is the start index of the segment in seconds from the start of the .vital track, {predWindow} is the prediction window (3, 5, 10, or 15 minutes), and {label} indicates whether the segment is associated with a hypotensive event (label=1) or not (label=0).

In [62]:
def get_segment_attributes_from_filename(file_path):
    pieces = os.path.basename(file_path).split('_')
    case = int(pieces[0])
    startX = int(pieces[1])
    predWindow = int(pieces[2])
    label = pieces[3].replace('.h5', '')
    return (case, startX, predWindow, label)
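As a quick check, the parser can be exercised on a hypothetical filename (the path and values below are illustrative, not from the dataset):

```python
import os

# Self-contained copy of the parser above; filename format:
# {case}_{startX}_{predWindow}_{label}.h5
def get_segment_attributes_from_filename(file_path):
    pieces = os.path.basename(file_path).split('_')
    case = int(pieces[0])
    startX = int(pieces[1])
    predWindow = int(pieces[2])
    label = pieces[3].replace('.h5', '')
    return (case, startX, predWindow, label)

# Hypothetical segment file: case 84, segment starting at 10411 s,
# 3-minute prediction window, positive (IOH) label.
print(get_segment_attributes_from_filename('/segments/84_10411_3_True.h5'))
# → (84, 10411, 3, 'True')
```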
In [63]:
count_negative_samples = 0
count_positive_samples = 0

samples = []

seg_folder = f"{VITAL_EXTRACTED_SEGMENTS}"
filenames = [y for x in os.walk(seg_folder) for y in glob(os.path.join(x[0], '*.h5'))]

for filename in filenames:
    (case, start_x, pred_window, label) = get_segment_attributes_from_filename(filename)
    
    # only load segments for cases of interest; this folder could have segments for hundreds of cases
    if case not in cases_of_interest_idx:
        continue

    if pred_window == 0 or pred_window == PREDICTION_WINDOW or PREDICTION_WINDOW == 'ALL':
        #print((case, start_x, pred_window, label))
        if label == 'True':
            count_positive_samples += 1
        else:
            count_negative_samples += 1
        sample = (filename, label)
        samples.append(sample)

print()
print(f"samples loaded:         {len(samples):5} ")
print(f'count negative samples: {count_negative_samples:5}')
print(f'count positive samples: {count_positive_samples:5}')
samples loaded:         19324 
count negative samples: 13991
count positive samples:  5333
In [64]:
# Divide by cases
sample_cases = defaultdict(lambda: []) 

for fn, _ in samples:
    (case, start_x, pred_window, label) = get_segment_attributes_from_filename(fn)
    sample_cases[case].append((fn, label))

# understand any missing cases of interest
sample_cases_idx = pd.Index(sample_cases.keys())
missing_case_ids = cases_of_interest_idx.difference(sample_cases_idx)
print(f'cases with no samples: {missing_case_ids.shape[0]}')
print(f'    {missing_case_ids}')
cases with no samples: 34
    Index([ 149,  268,  561,  641,  864,  979, 1158, 1174, 1317, 1600, 1957, 2158,
       2221, 2224, 2413, 2830, 2859, 3112, 3596, 3648, 3868, 4380, 4485, 4755,
       4783, 5080, 5204, 5266, 5755, 5782, 5871, 6275, 6331, 6360],
      dtype='int64')

Split data into training, validation, and test sets¶

Use a 6:1:3 ratio and prevent samples from a single case from being split across different sets.

Note: the number of samples at each time point is not the same, because the first event can occur before the 3/5/10/15 minute mark.
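The case-level split below uses sklearn's train_test_split; the same idea can be sketched with the standard library alone (case IDs and seed here are illustrative): shuffle the case IDs once, slice by ratio, and every segment from a given case then lands in exactly one set.

```python
import random

# Simplified stdlib sketch of a case-level 6:1:3 split.
case_ids = list(range(100))   # hypothetical case IDs
rng = random.Random(2024)     # illustrative seed for reproducibility
rng.shuffle(case_ids)

# Integer arithmetic avoids float rounding at the slice boundaries.
n = len(case_ids)
n_train = n * 6 // 10
n_val = n // 10

train_cases = set(case_ids[:n_train])
val_cases   = set(case_ids[n_train:n_train + n_val])
test_cases  = set(case_ids[n_train + n_val:])

print(len(train_cases), len(val_cases), len(test_cases))  # 60 10 30
```

Splitting at the case level, rather than the segment level, is what prevents segments from one surgery from leaking across sets.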

In [65]:
# Set target sizes
train_ratio = 0.6
val_ratio = 0.1
test_ratio = 1 - train_ratio - val_ratio # ensure ratios sum to 1

# Split samples into train and other
sample_cases_train, sample_cases_other = train_test_split(list(sample_cases.keys()), test_size=(1 - train_ratio), random_state=RANDOM_SEED)

# Split other into val and test
sample_cases_val, sample_cases_test = train_test_split(sample_cases_other, test_size=(test_ratio / (1 - train_ratio)), random_state=RANDOM_SEED)

# Check how many samples are in each set
print(f'Train/Val/Test Summary by Cases')
print(f"Train cases:  {len(sample_cases_train):5}, ({len(sample_cases_train) / len(sample_cases):.2%})")
print(f"Val cases:    {len(sample_cases_val):5}, ({len(sample_cases_val) / len(sample_cases):.2%})")
print(f"Test cases:   {len(sample_cases_test):5}, ({len(sample_cases_test) / len(sample_cases):.2%})")
print(f"Total cases:  {(len(sample_cases_train) + len(sample_cases_val) + len(sample_cases_test)):5}")
Train/Val/Test Summary by Cases
Train cases:   1637, (59.99%)
Val cases:      272, (9.97%)
Test cases:     820, (30.05%)
Total cases:   2729

Now that the cases have been split according to the desired ratio, assign all of the segments for each case into the target (train, validation, test) set:

In [66]:
sample_cases_train = set(sample_cases_train)
sample_cases_val = set(sample_cases_val)
sample_cases_test = set(sample_cases_test)

samples_train = []
samples_val = []
samples_test = []

for cid, segs in sample_cases.items():
    if cid in sample_cases_train:
        for seg in segs:
            samples_train.append(seg)
    if cid in sample_cases_val:
        for seg in segs:
            samples_val.append(seg)
    if cid in sample_cases_test:
        for seg in segs:
            samples_test.append(seg)
            
# Check how many samples are in each set
print(f'Train/Val/Test Summary by Events')
print(f"Train events:  {len(samples_train):5}, ({len(samples_train) / len(samples):.2%})")
print(f"Val events:    {len(samples_val):5}, ({len(samples_val) / len(samples):.2%})")
print(f"Test events:   {len(samples_test):5}, ({len(samples_test) / len(samples):.2%})")
print(f"Total events:  {(len(samples_train) + len(samples_val) + len(samples_test)):5}")
Train/Val/Test Summary by Events
Train events:  11665, (60.37%)
Val events:     1979, (10.24%)
Test events:    5680, (29.39%)
Total events:  19324

Validate Train/Val/Test Splits¶

Verify the label distribution in each set:

In [67]:
PRINT_ALL_CASE_SPLIT_DETAILS = False

case_to_sample_distribution = defaultdict(lambda: {'train': [0, 0], 'val': [0, 0], 'test': [0, 0]})

def populate_case_to_sample_distribution(mysamples, idx):
    neg = 0
    pos = 0
    
    for fn, _ in mysamples:
        (case, start_x, pred_window, label) = get_segment_attributes_from_filename(fn)
        slot = 0 if label == 'False' else 1
        case_to_sample_distribution[case][idx][slot] += 1
        if slot == 0:
            neg += 1
        else:
            pos += 1
                
    return (neg, pos)

train_neg, train_pos = populate_case_to_sample_distribution(samples_train, 'train')
val_neg, val_pos     = populate_case_to_sample_distribution(samples_val,   'val')
test_neg, test_pos   = populate_case_to_sample_distribution(samples_test,  'test')

print(f'Total Cases Present: {len(case_to_sample_distribution):5}')
print()

train_tot = train_pos + train_neg
val_tot = val_pos + val_neg
test_tot = test_pos + test_neg
print(f'Train: P: {train_pos:5} ({(train_pos/train_tot):.2}), N: {train_neg:5} ({(train_neg/train_tot):.2})')
print(f'Val:   P: {val_pos:5} ({(val_pos/val_tot):.2}), N: {val_neg:5} ({(val_neg/val_tot):.2})')
print(f'Test:  P: {test_pos:5} ({(test_pos/test_tot):.2}), N: {test_neg:5}  ({(test_neg/test_tot):.2})')
print()

total_pos = train_pos + val_pos + test_pos
total_neg = train_neg + val_neg + test_neg
total = total_pos + total_neg
print(f'P/N Ratio: {(total_pos)}:{(total_neg)}')
print(f'P Percent: {(total_pos/total):.2}')
print(f'N Percent: {(total_neg/total):.2}')
print()

if PRINT_ALL_CASE_SPLIT_DETAILS:
    for ci in sorted(case_to_sample_distribution.keys()):
        print(f'{ci}: {case_to_sample_distribution[ci]}')
Total Cases Present:  2729

Train: P:  3285 (0.28), N:  8380 (0.72)
Val:   P:   561 (0.28), N:  1418 (0.72)
Test:  P:  1487 (0.26), N:  4193  (0.74)

P/N Ratio: 5333:13991
P Percent: 0.28
N Percent: 0.72

Verify that no data has leaked between the train, validation, and test sets:

In [68]:
def check_data_leakage(full_data, train_data, val_data, test_data):
    # Convert to sets for easier operations
    full_data_set = set(full_data)
    train_data_set = set(train_data)
    val_data_set = set(val_data)
    test_data_set = set(test_data)

    # Check if train, val, test are subsets of full_data
    if not train_data_set.issubset(full_data_set):
        return "Train data has leakage"
    if not val_data_set.issubset(full_data_set):
        return "Validation data has leakage"
    if not test_data_set.issubset(full_data_set):
        return "Test data has leakage"

    # Check if train, val, test are disjoint
    if train_data_set & val_data_set:
        return "Train and validation data are not disjoint"
    if train_data_set & test_data_set:
        return "Train and test data are not disjoint"
    if val_data_set & test_data_set:
        return "Validation and test data are not disjoint"

    return "No data leakage detected"

print(check_data_leakage(list(sample_cases.keys()), sample_cases_train, sample_cases_val, sample_cases_test))
No data leakage detected
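To see the check fire, the disjointness portion of the function above can be run on a deliberately overlapping split (the example sets are illustrative):

```python
# Self-contained copy of the disjointness checks from check_data_leakage.
def check_disjoint(train, val, test):
    train, val, test = set(train), set(val), set(test)
    if train & val:
        return "Train and validation data are not disjoint"
    if train & test:
        return "Train and test data are not disjoint"
    if val & test:
        return "Validation and test data are not disjoint"
    return "No data leakage detected"

print(check_disjoint([1, 2], [3], [4]))  # No data leakage detected
print(check_disjoint([1, 2], [2], [4]))  # Train and validation data are not disjoint
```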

Create a custom vitalDataset class derived from Dataset to be used by the data loaders:

In [69]:
# Create vitalDataset class
class vitalDataset(Dataset):
    def __init__(self, samples, normalize_abp=False):
        self.samples = samples
        self.normalize_abp = normalize_abp

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        # Get metadata for this event
        segment = self.samples[idx]

        file_path = segment[0]
        label = (segment[1] == "True" or segment[1] == "True.vital")

        (abp, ecg, eeg) = get_segment_data(file_path)

        if abp is None or eeg is None or ecg is None:
            return (np.zeros(30000), np.zeros(30000), np.zeros(7680), 0)
        
        if self.normalize_abp:
            abp -= 65
            abp /= 65

        return abp, ecg, eeg, label

NORMALIZE_ABP = False

train_dataset = vitalDataset(samples_train, NORMALIZE_ABP)
val_dataset = vitalDataset(samples_val, NORMALIZE_ABP)
test_dataset = vitalDataset(samples_test, NORMALIZE_ABP)
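The optional ABP normalization in vitalDataset subtracts the 65 mmHg hypotension threshold and rescales by 65 mmHg, mapping the threshold itself to 0. A minimal check (with illustrative values):

```python
import numpy as np

# Same transform as vitalDataset applies when normalize_abp=True.
abp = np.array([65.0, 97.5, 130.0, 32.5])
abp_norm = (abp - 65.0) / 65.0
print(abp_norm.tolist())  # [0.0, 0.5, 1.0, -0.5]
```

Values below the threshold become negative, so a hypotensive reading is immediately visible in normalized space.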

Train/Val/Test Splits Summary Statistics¶

Analyze the mean value distribution across each dataset to verify that their characteristics are consistent:

In [70]:
def generate_nan_means(mydataset):
    xs = np.zeros(len(mydataset))
    ys = np.zeros(len(mydataset), dtype=int)

    for i, (abp, ecg, eeg, y) in enumerate(iter(mydataset)):
        xs[i] = np.nanmean(abp)
        ys[i] = int(y)

    return pd.DataFrame({'abp_nanmean': xs, 'label': ys})
In [71]:
def generate_nan_means_summaries(tr, va, te, group='all'):
    if group == 'all':
        return pd.DataFrame({
            'train': tr.describe()['abp_nanmean'],
            'validation': va.describe()['abp_nanmean'],
            'test': te.describe()['abp_nanmean']
        })
    
    mytr = tr.reset_index()
    myva = va.reset_index()
    myte = te.reset_index()
    
    label_flag = (group == 'positive')
    
    return pd.DataFrame({
        'train':      mytr[mytr['label'] == label_flag].describe()['abp_nanmean'],
        'validation': myva[myva['label'] == label_flag].describe()['abp_nanmean'],
        'test':       myte[myte['label'] == label_flag].describe()['abp_nanmean']
    })
In [72]:
def plot_nan_means(df, plot_label):
    mydf = df.reset_index()

    maxCases = 'ALL' if MAX_CASES is None else MAX_CASES
    plot_title = f'{plot_label} - ABP nanmean Values, {PREDICTION_WINDOW} Minutes, {maxCases} Cases'
    
    ax = mydf[mydf['label'] == False].plot.scatter(
        x='index', y='abp_nanmean', color='DarkBlue', label='Negative', 
        title=plot_title, figsize=(16,9))

    negative_median = mydf[mydf['label'] == False]['abp_nanmean'].median()
    ax.axhline(y=negative_median, color='DarkBlue', linestyle='--', label='Negative Median')
    
    mydf[mydf['label'] == True].plot.scatter(
        x='index', y='abp_nanmean', color='DarkOrange', label='Positive', ax=ax);
    
    positive_median = mydf[mydf['label'] == True]['abp_nanmean'].median()
    ax.axhline(y=positive_median, color='DarkOrange', linestyle='--', label='Positive Median')
    
    ax.legend(loc='upper right')
In [73]:
def plot_nan_means_hist(df):
    df.plot.hist(column=['abp_nanmean'], by='label', bins=50, figsize=(10, 8));
In [74]:
train_abp_nanmeans = generate_nan_means(train_dataset)
val_abp_nanmeans = generate_nan_means(val_dataset)
test_abp_nanmeans = generate_nan_means(test_dataset)

ABP Nanmean Summaries¶

In [75]:
generate_nan_means_summaries(train_abp_nanmeans, val_abp_nanmeans, test_abp_nanmeans)
Out[75]:
train validation test
count 11665.000000 1979.000000 5680.000000
mean 85.280193 85.112286 85.286773
std 12.280448 11.551003 11.841797
min 65.136129 65.367918 65.154759
25% 75.612491 76.251820 76.134000
50% 83.419197 83.649439 83.697692
75% 93.314144 92.505598 93.093587
max 138.285504 147.949437 136.381225
In [76]:
generate_nan_means_summaries(train_abp_nanmeans, val_abp_nanmeans, test_abp_nanmeans, group='positive')
Out[76]:
train validation test
count 3285.000000 561.000000 1487.000000
mean 76.363533 76.421972 76.353014
std 9.231705 8.731134 9.058046
min 65.136129 65.367918 65.154759
25% 69.914073 69.981088 70.123858
50% 73.991152 74.319745 74.215963
75% 79.909642 80.241065 79.848930
max 132.202888 122.935320 136.381225
In [77]:
generate_nan_means_summaries(train_abp_nanmeans, val_abp_nanmeans, test_abp_nanmeans, group='negative')
Out[77]:
train validation test
count 8380.000000 1418.000000 4193.000000
mean 88.775566 88.550414 88.455030
std 11.538735 10.695512 11.069511
min 65.225560 66.473179 65.476802
25% 79.990967 80.760238 80.078761
50% 87.414185 86.901416 87.140528
75% 96.060508 95.399991 95.449323
max 138.285504 147.949437 130.780501

ABP Nanmean Histograms¶

In [78]:
plot_nan_means_hist(train_abp_nanmeans)
In [79]:
plot_nan_means_hist(val_abp_nanmeans)
In [80]:
plot_nan_means_hist(test_abp_nanmeans)

ABP Nanmean Scatter Plots¶

In [81]:
plot_nan_means(train_abp_nanmeans, 'Train')
In [82]:
plot_nan_means(val_abp_nanmeans, 'Validation')
In [83]:
plot_nan_means(test_abp_nanmeans, 'Test')
In [84]:
# Memory cleanup
del train_abp_nanmeans
del val_abp_nanmeans
del test_abp_nanmeans

Classification Studies¶

Check whether the data can be easily classified using non-deep-learning methods. Create a balanced sample of IOH and non-IOH events and use a simple classifier to see whether the data can be easily separated. Datasets that are easily separated by non-deep-learning methods should also be easy for deep learning models to classify.

In [85]:
MAX_CLASSIFICATION_SAMPLES = 250
MAX_SAMPLE_SIZE = 1600
classification_sample_size = MAX_SAMPLE_SIZE if len(samples) >= MAX_SAMPLE_SIZE else len(samples)

classification_samples = random.sample(samples, classification_sample_size)

positive_samples = []
negative_samples = []

for sample in classification_samples:
    (sampleAbp, sampleEcg, sampleEeg) = get_segment_data(sample[0])
    
    if sample[1] == "True":
        positive_samples.append([sample[0], True, sampleAbp, sampleEcg, sampleEeg])
    else:
        negative_samples.append([sample[0], False, sampleAbp, sampleEcg, sampleEeg])

positive_samples = pd.DataFrame(positive_samples, columns=["file_path", "segment_label", "segment_abp", "segment_ecg", "segment_eeg"])
negative_samples = pd.DataFrame(negative_samples, columns=["file_path", "segment_label", "segment_abp", "segment_ecg", "segment_eeg"])

total_to_sample_pos = MAX_CLASSIFICATION_SAMPLES if len(positive_samples) >= MAX_CLASSIFICATION_SAMPLES else len(positive_samples)
total_to_sample_neg = MAX_CLASSIFICATION_SAMPLES if len(negative_samples) >= MAX_CLASSIFICATION_SAMPLES else len(negative_samples)

# Select up to MAX_CLASSIFICATION_SAMPLES random samples where segment_label is True
positive_samples = positive_samples.sample(total_to_sample_pos, random_state=RANDOM_SEED)
# Select up to MAX_CLASSIFICATION_SAMPLES random samples where segment_label is False
negative_samples = negative_samples.sample(total_to_sample_neg, random_state=RANDOM_SEED)

print(f'positive_samples: {len(positive_samples)}')
print(f'negative_samples: {len(negative_samples)}')

# Combine the positive and negative samples
samples_balanced = pd.concat([positive_samples, negative_samples])
positive_samples: 250
negative_samples: 250

Define a function to build the data for the study. Each waveform channel can be enabled or disabled:

In [86]:
def get_x_y(samples, use_abp, use_ecg, use_eeg):
    # Create X and y, using data from `samples_balanced` and the `use_abp`, `use_ecg`, and `use_eeg` variables
    X = []
    y = []
    for i in range(len(samples)):
        row = samples.iloc[i]
        sample = np.array([])
        if use_abp:
            if len(row['segment_abp']) != 30000:
                print(len(row['segment_abp']))
            sample = np.append(sample, row['segment_abp'])
        if use_ecg:
            if len(row['segment_ecg']) != 30000:
                print(len(row['segment_ecg']))
            sample = np.append(sample, row['segment_ecg'])
        if use_eeg:
            if len(row['segment_eeg']) != 7680:
                print(len(row['segment_eeg']))
            sample = np.append(sample, row['segment_eeg'])
        X.append(sample)
        # Convert the label from boolean to 0 or 1
        y.append(int(row['segment_label']))
    return X, y
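The fixed lengths checked in `get_x_y` follow from the sampling rates used throughout the notebook (ABP and ECG at 500 Hz, EEG at 128 Hz) applied to one-minute segments:

```python
# Sampling rates used in this notebook: ABP and ECG at 500 Hz, EEG at 128 Hz.
# One-minute segments therefore contain a fixed number of samples, which is
# exactly what get_x_y checks for.
ABP_HZ, ECG_HZ, EEG_HZ = 500, 500, 128
SEGMENT_SECONDS = 60

abp_len = ABP_HZ * SEGMENT_SECONDS   # 30000
ecg_len = ECG_HZ * SEGMENT_SECONDS   # 30000
eeg_len = EEG_HZ * SEGMENT_SECONDS   # 7680
```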

KNN¶

Define KNN run. This is configurable to enable or disable different data channels so that we can study them individually or together:

In [87]:
N_NEIGHBORS = 20

def run_knn(samples, use_abp, use_ecg, use_eeg):
    # Get samples
    X,y = get_x_y(samples, use_abp, use_ecg, use_eeg)

    # Split samples into train and val
    knn_X_train, knn_X_test, knn_y_train, knn_y_test = train_test_split(X, y, test_size=0.2, random_state=RANDOM_SEED)

    # Normalize the data
    scaler = StandardScaler()
    scaler.fit(knn_X_train)

    knn_X_train = scaler.transform(knn_X_train)
    knn_X_test = scaler.transform(knn_X_test)

    # Initialize the KNN classifier
    knn = KNeighborsClassifier(n_neighbors=N_NEIGHBORS)

    # Train the KNN classifier
    knn.fit(knn_X_train, knn_y_train)

    # Make predictions on the test set
    knn_y_pred = knn.predict(knn_X_test)

    # Evaluate the KNN classifier
    print(f"ABP: {use_abp}, ECG: {use_ecg}, EEG: {use_eeg}")
    print(f"Confusion matrix:\n{confusion_matrix(knn_y_test, knn_y_pred)}")
    print(f"Classification report:\n{classification_report(knn_y_test, knn_y_pred)}")

Study each waveform independently, then ABP+EEG (which had best results in paper), and ABP+ECG+EEG:

In [88]:
run_knn(samples_balanced, use_abp=True, use_ecg=False, use_eeg=False)
run_knn(samples_balanced, use_abp=False, use_ecg=True, use_eeg=False)
run_knn(samples_balanced, use_abp=False, use_ecg=False, use_eeg=True)
run_knn(samples_balanced, use_abp=True, use_ecg=False, use_eeg=True)
run_knn(samples_balanced, use_abp=True, use_ecg=True, use_eeg=True)
ABP: True, ECG: False, EEG: False
Confusion matrix:
[[48  6]
 [20 26]]
Classification report:
              precision    recall  f1-score   support

           0       0.71      0.89      0.79        54
           1       0.81      0.57      0.67        46

    accuracy                           0.74       100
   macro avg       0.76      0.73      0.73       100
weighted avg       0.75      0.74      0.73       100

ABP: False, ECG: True, EEG: False
Confusion matrix:
[[32 22]
 [21 25]]
Classification report:
              precision    recall  f1-score   support

           0       0.60      0.59      0.60        54
           1       0.53      0.54      0.54        46

    accuracy                           0.57       100
   macro avg       0.57      0.57      0.57       100
weighted avg       0.57      0.57      0.57       100

ABP: False, ECG: False, EEG: True
Confusion matrix:
[[ 6 48]
 [ 6 40]]
Classification report:
              precision    recall  f1-score   support

           0       0.50      0.11      0.18        54
           1       0.45      0.87      0.60        46

    accuracy                           0.46       100
   macro avg       0.48      0.49      0.39       100
weighted avg       0.48      0.46      0.37       100

ABP: True, ECG: False, EEG: True
Confusion matrix:
[[42 12]
 [17 29]]
Classification report:
              precision    recall  f1-score   support

           0       0.71      0.78      0.74        54
           1       0.71      0.63      0.67        46

    accuracy                           0.71       100
   macro avg       0.71      0.70      0.71       100
weighted avg       0.71      0.71      0.71       100

ABP: True, ECG: True, EEG: True
Confusion matrix:
[[34 20]
 [12 34]]
Classification report:
              precision    recall  f1-score   support

           0       0.74      0.63      0.68        54
           1       0.63      0.74      0.68        46

    accuracy                           0.68       100
   macro avg       0.68      0.68      0.68       100
weighted avg       0.69      0.68      0.68       100

Based on the macro-average F1-scores above, ABP alone and ABP+EEG are somewhat predictive, ECG alone and EEG alone are weakly predictive, and ABP+ECG+EEG is somewhat less predictive than either ABP or ABP+EEG.

Models based on ABP alone or on ABP+EEG are therefore expected to train well with good performance. The other signals appear mostly to add noise and are not strongly predictive. This agrees with the results from the paper.

t-SNE¶

Define t-SNE run. This is configurable to enable or disable different data channels so that we can study them individually or together:

In [89]:
def run_tsne(samples, use_abp, use_ecg, use_eeg):
    # Get samples
    X,y = get_x_y(samples, use_abp, use_ecg, use_eeg)
    
    # Convert X and y to numpy arrays
    X = np.array(X)
    y = np.array(y)

    # Run t-SNE on the samples
    tsne = TSNE(n_components=len(np.unique(y)), random_state=RANDOM_SEED)
    X_tsne = tsne.fit_transform(X)
    
    # Create a scatter plot of the t-SNE representation
    plt.figure(figsize=(16, 9))
    plt.title(f"use_abp={use_abp}, use_ecg={use_ecg}, use_eeg={use_eeg}")
    for i, label in enumerate(set(y)):
        plt.scatter(X_tsne[y == label, 0], X_tsne[y == label, 1], label=label)
    plt.legend()
    plt.show()

Study each waveform independently, then ABP+EEG (which had best results in paper), and ABP+ECG+EEG:

In [90]:
run_tsne(samples_balanced, use_abp=True, use_ecg=False, use_eeg=False)
run_tsne(samples_balanced, use_abp=False, use_ecg=True, use_eeg=False)
run_tsne(samples_balanced, use_abp=False, use_ecg=False, use_eeg=True)
run_tsne(samples_balanced, use_abp=True, use_ecg=False, use_eeg=True)
run_tsne(samples_balanced, use_abp=True, use_ecg=True, use_eeg=True)

Based on the plots above, it appears that ABP alone, ABP+EEG and ABP+ECG+EEG are somewhat separable, though with outliers, and should be trainable by our model. The ECG and EEG data are not readily separable from the other data. This agrees with the results from the paper.

In [91]:
# Memory cleanup
del samples_balanced

Model¶

The model implementation is based on the CNN architecture described in Jo Y-Y et al. (2022). It is designed to handle 1, 2, or 3 biosignal waveforms simultaneously, allowing for flexible model configurations based on different combinations of physiological data:

  • ABP alone
  • EEG alone
  • ECG alone
  • ABP + EEG
  • ABP + ECG
  • EEG + ECG
  • ABP + EEG + ECG

Model Architecture¶

The architecture, as depicted in Figure 2 from the original paper, utilizes a ResNet-based approach tailored for time-series data from different physiological signals. The model architecture is adapted to handle varying input signal frequencies, with specific hyperparameters for each signal type, particularly EEG, due to its distinct characteristics compared to ABP and ECG. A diagram of the model architecture is shown below:

Architecture of the hypotension risk prediction model using multiple waveforms

Each input signal is processed through a sequence of 12 7-layer residual blocks, followed by flattening and a linear transformation to produce a 32-dimensional feature vector per signal type. These vectors are concatenated (when multiple signals are used) and passed through two additional linear layers to produce a single output representing the IOH index. A threshold, determined experimentally to minimize the difference between sensitivity and specificity, is applied to this index to perform binary classification for predicting IOH events.
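The threshold selection can be sketched in isolation. This is a minimal illustration of the argmin |sensitivity − specificity| rule on toy scores (`pick_threshold` is a hypothetical helper; `eval_model` later in the notebook performs the same 101-step scan):

```python
import numpy as np

def pick_threshold(scores, labels):
    """Pick the cutoff where sensitivity and specificity are closest,
    scanning 101 candidate thresholds from 0 to 1."""
    best_t, best_gap = 0.0, float("inf")
    for t in np.linspace(0, 1, 101):
        preds = scores > t
        sensitivity = np.mean(preds[labels == 1])    # true positive rate
        specificity = np.mean(~preds[labels == 0])   # true negative rate
        gap = abs(sensitivity - specificity)
        if gap < best_gap:
            best_gap, best_t = gap, t
    return best_t

# Toy scores and labels for illustration
labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.2, 0.8, 0.9])
t = pick_threshold(scores, labels)
```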

The hyperparameters for the residual blocks are specified in Supplemental Table 1 of the original paper and vary by signal type.

A forward pass traverses 85 layers in each signal path before concatenation, followed by two more linear layers and a final sigmoid activation layer that produces the prediction measure.

Residual Block Definition¶

Each residual block consists of the following seven layers:

  • Batch normalization
  • ReLU
  • Dropout (0.5)
  • 1D convolution
  • Batch normalization
  • ReLU
  • 1D convolution

Skip connections are included to aid in gradient flow during training, with optional 1D convolution in the skip connection to align dimensions.

Residual Block Hyperparameters¶

The hyperparameters are detailed in Supplemental Table 1 of the original paper. A screenshot of these hyperparameters is provided for reference below:

Supplemental Table 1 from original paper

Note: Please be aware of a transcription error in the original paper's Supplemental Table 1 for the ECG+ABP configuration in Residual Blocks 11 and 12, where the output size should be 469 × 6 instead of the reported 496 × 6.
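The halving pattern in the table can be checked arithmetically. In the model implementation below, every other residual block halves the sequence length (rounding up), which reproduces the table's output sizes for the 500 Hz signals and confirms 469 rather than 496 for the final blocks:

```python
import math

# Sequence lengths per residual block for the 500 Hz signals (ABP/ECG),
# reproducing the sizes in Supplemental Table 1: every even-indexed block
# halves the length, rounding up.
size = 30000
sizes = [size]
for i in range(12):
    if i % 2 == 0:          # even-indexed blocks downsample by 2
        size = math.ceil(size / 2)
    sizes.append(size)

print(sizes)
# The final blocks end at 469 (not 496): 1875 -> 938 -> 469
```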

In [92]:
# Define the residual block which is implemented for each biosignal path
class ResidualBlock(nn.Module):
    def __init__(self, in_features: int, out_features: int, in_channels: int, out_channels: int, kernel_size: int, stride: int = 1, size_down: bool = False, ignoreSkipConnection: bool = False) -> None:
        super(ResidualBlock, self).__init__()
        
        self.ignoreSkipConnection = ignoreSkipConnection

        # calculate the appropriate padding required to ensure expected sequence lengths out of each residual block
        padding = int((((stride-1)*in_features)-stride+kernel_size)/2)

        self.size_down = size_down
        self.bn1 = nn.BatchNorm1d(in_channels)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(0.5)
        self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding, bias=False)
        self.bn2 = nn.BatchNorm1d(out_channels)
        self.conv2 = nn.Conv1d(out_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding, bias=False)
        
        self.residualConv = nn.Conv1d(in_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding, bias=False)

        # It is unclear where in the block sequence the downsampling should occur;
        # the size reduction itself is specified in Supplemental Table S1
        if self.size_down:
            pool_padding = (1 if (in_features % 2 > 0) else 0)
            self.downsample = nn.MaxPool1d(kernel_size=2, stride=2, padding = pool_padding)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x

        out = self.bn1(x)
        out = self.relu(out)
        out = self.dropout(out)
        out = self.conv1(out)

        if self.size_down:
            out = self.downsample(out)

        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv2(out)

        if not self.ignoreSkipConnection:
          if out.shape != identity.shape:
              # run the residual through a convolution when necessary
              identity = self.residualConv(identity)

              outlen = np.prod(out.shape)
              idlen = np.prod(identity.shape)
              # downsample when required
              if idlen > outlen:
                  identity = self.downsample(identity)
              # match dimensions
              identity = identity.reshape(out.shape)

          # add the residual       
          out += identity

        return  out

# Define the parameterizable model
class HypotensionCNN(nn.Module):
    def __init__(self, useAbp: bool = True, useEeg: bool = False, useEcg: bool = False, device: str = "cpu", nResiduals: int = 12, ignoreSkipConnection: bool = False, useSigmoid: bool = True) -> None:
        assert useAbp or useEeg or useEcg, "At least one data track must be used"
        assert nResiduals > 0 and nResiduals <= 12, "Number of residual blocks must be between 1 and 12"
        super(HypotensionCNN, self).__init__()

        self.device = device

        self.useAbp = useAbp
        self.useEeg = useEeg
        self.useEcg = useEcg
        self.nResiduals = nResiduals
        self.useSigmoid = useSigmoid

        # Size of the concatenated output from the residual blocks
        concatSize = 0

        if useAbp:
          self.abpBlocks = []
          self.abpMultipliers = [1, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 6, 6]
          self.abpSizes = [30000, 15000, 15000, 7500, 7500, 3750, 3750, 1875, 1875, 938, 938, 469, 469]
          for i in range(self.nResiduals):
            downsample = i % 2 == 0
            self.abpBlocks.append(ResidualBlock(self.abpSizes[i], self.abpSizes[i+1], self.abpMultipliers[i], self.abpMultipliers[i+1], 15 if i < 6 else 7, 1, downsample, ignoreSkipConnection))
          self.abpResiduals = nn.Sequential(*self.abpBlocks)
          self.abpFc = nn.Linear(self.abpMultipliers[self.nResiduals] * self.abpSizes[self.nResiduals], 32)
          concatSize += 32
        
        if useEcg:
          self.ecgBlocks = []
          self.ecgMultipliers = [1, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 6, 6]
          self.ecgSizes = [30000, 15000, 15000, 7500, 7500, 3750, 3750, 1875, 1875, 938, 938, 469, 469]

          for i in range(self.nResiduals):
            downsample = i % 2 == 0
            self.ecgBlocks.append(ResidualBlock(self.ecgSizes[i], self.ecgSizes[i+1], self.ecgMultipliers[i], self.ecgMultipliers[i+1], 15 if i < 6 else 7, 1, downsample, ignoreSkipConnection))
          self.ecgResiduals = nn.Sequential(*self.ecgBlocks)
          self.ecgFc = nn.Linear(self.ecgMultipliers[self.nResiduals] * self.ecgSizes[self.nResiduals], 32)
          concatSize += 32

        if useEeg:
          self.eegBlocks = []
          self.eegMultipliers = [1, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 6, 6]
          self.eegSizes = [7680, 3840, 3840, 1920, 1920, 960, 960, 480, 480, 240, 240, 120, 120]

          for i in range(self.nResiduals):
            downsample = i % 2 == 0
            self.eegBlocks.append(ResidualBlock(self.eegSizes[i], self.eegSizes[i+1], self.eegMultipliers[i], self.eegMultipliers[i+1], 7 if i < 6 else 3, 1, downsample, ignoreSkipConnection))
          self.eegResiduals = nn.Sequential(*self.eegBlocks)
          self.eegFc = nn.Linear(self.eegMultipliers[self.nResiduals] * self.eegSizes[self.nResiduals], 32)
          concatSize += 32

        # The fullLinear1 layer accepts the outputs of the concatenation of the ResidualBlocks from each biosignal path
        self.fullLinear1 = nn.Linear(concatSize, 16)
        self.fullLinear2 = nn.Linear(16, 1)
        self.sigmoid = nn.Sigmoid()


    def forward(self, abp: torch.Tensor, eeg: torch.Tensor, ecg: torch.Tensor) -> torch.Tensor:
        batchSize = len(abp)

        # conditionally operate ABP, EEG, and ECG networks
        tensors = []
        if self.useAbp:
          self.abpResiduals.to(self.device)
          abp = self.abpResiduals(abp)
          totalLen = np.prod(abp.shape)
          abp = torch.reshape(abp, (batchSize, int(totalLen / batchSize)))
          abp = self.abpFc(abp)
          tensors.append(abp)

        if self.useEeg:
          self.eegResiduals.to(self.device)
          eeg = self.eegResiduals(eeg)
          totalLen = np.prod(eeg.shape)
          eeg = torch.reshape(eeg, (batchSize, int(totalLen / batchSize)))
          eeg = self.eegFc(eeg)
          tensors.append(eeg)
        
        if self.useEcg:
          self.ecgResiduals.to(self.device)
          ecg = self.ecgResiduals(ecg)
          totalLen = np.prod(ecg.shape)
          ecg = torch.reshape(ecg, (batchSize, int(totalLen / batchSize)))
          ecg = self.ecgFc(ecg)
          tensors.append(ecg)

        # concatenate the tensors along dimension 1 if there's more than one, otherwise use the single tensor
        merged = torch.cat(tensors, dim=1) if len(tensors) > 1 else tensors[0]

        totalLen = np.prod(merged.shape)
        merged = torch.reshape(merged, (batchSize, int(totalLen / batchSize)))
        out = self.fullLinear1(merged)
        out = self.fullLinear2(out)
        # Skip the final model sigmoid when using BCEWithLogitsLoss loss function
        if self.useSigmoid:
            out = self.sigmoid(out)

        return out

Training¶

The training loop is highly parameterizable, and all aspects can be configured. The original paper uses binary cross entropy as the loss function with Adam as the optimizer, a learning rate of 0.0001, and with training configured to run for up to 100 epochs, with early stopping implemented if no improvement in loss is observed over five consecutive epochs. Our models were run with the same parameters, but longer patience values to account for the noisier and smaller dataset that we had access to.
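The early-stopping rule described above can be sketched in isolation. `run_with_early_stopping` is a hypothetical helper that operates only on a list of per-epoch validation losses; the real loop additionally calls the training and evaluation functions defined below and checkpoints the model at each epoch:

```python
# Hypothetical helper isolating the early-stopping bookkeeping only;
# a patience of 5 matches the paper, but the notebook runs use longer values.
PATIENCE = 5
MAX_EPOCHS = 100

def run_with_early_stopping(val_losses, patience=PATIENCE, max_epochs=MAX_EPOCHS):
    """Return (best_epoch, stop_epoch) for a sequence of validation losses."""
    best_loss, best_epoch, bad_epochs = float("inf"), 0, 0
    for epoch, val_loss in enumerate(val_losses[:max_epochs]):
        if val_loss < best_loss:
            best_loss, best_epoch, bad_epochs = val_loss, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return best_epoch, epoch   # no improvement for `patience` epochs
    return best_epoch, min(len(val_losses), max_epochs) - 1
```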

Define a function to train the model for one epoch. Collect the losses so the mean can be reported.

In [93]:
def train_model_one_iter(model, device, loss_func, optimizer, train_loader):
    model.train()
    train_losses = []
    
    for abp, ecg, eeg, label in tqdm(train_loader):
        batch = len(abp)
        abp = abp.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        ecg = ecg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        eeg = eeg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        label = label.type(torch.float).reshape(batch, 1).to(device)

        optimizer.zero_grad()
        mdl = model(abp, eeg, ecg)
        loss = loss_func(torch.nan_to_num(mdl), label)
        loss.backward()
        optimizer.step()
        train_losses.append(loss.cpu().data.numpy())
    return np.mean(train_losses)

Evaluate the model using the provided loss function. This is typically called on the validation dataset at each epoch:

In [94]:
def evaluate_model(model, loss_func, val_loader):
    model.eval()
    val_losses = []
    for abp, ecg, eeg, label in tqdm(val_loader):
        batch = len(abp)

        abp = abp.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        ecg = ecg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        eeg = eeg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        label = label.type(torch.float).reshape(batch, 1).to(device)

        mdl = model(abp, eeg, ecg)
        loss = loss_func(torch.nan_to_num(mdl), label)
        val_losses.append(loss.cpu().data.numpy())
    return np.mean(val_losses)

Define a function to plot the training and validation losses from the entire training run and indicate at which epoch the validation loss was minimized. This is typically patience epochs before the end of training:

In [95]:
def plot_losses(train_losses, val_losses, best_epoch, experimentName):
    print()
    print(f'Plot Validation and Loss Values from Training')
    print(f'  Epoch with best Validation Loss:  {best_epoch:3}, {val_losses[best_epoch]:.4}')

    # Create x-axis values for epochs
    epochs = range(0, len(train_losses))

    plt.figure(figsize=(16, 9))

    # Plot the training and validation losses
    plt.plot(epochs, train_losses, 'b', label='Training Loss')
    plt.plot(epochs, val_losses, 'r', label='Validation Loss')

    # Add a vertical bar at the best_epoch
    plt.axvline(x=best_epoch, color='g', linestyle='--', label='Best Epoch')

    # Shade everything to the right of the best_epoch a light red
    plt.axvspan(best_epoch, max(epochs), facecolor='r', alpha=0.1)

    # Add labels and title
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.title(experimentName)

    # Add legend
    plt.legend(loc='upper right')

    # Save plot to disk
    plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_losses.png'))

    # Show the plot
    plt.show()

Define a function to calculate the complete performance metric profile of a model. As in the original paper, the threshold is found as the argmin of the Δ(sensitivity, specificity):

In [96]:
def eval_model(model, device, dataloader, loss_func, print_detailed: bool = False):
    model.eval()
    model = model.to(device)
    total_loss = 0
    all_predictions = []
    all_labels = []

    with torch.no_grad():
        for abp, ecg, eeg, label in tqdm(dataloader):
            batch = len(abp)
    
            abp = torch.nan_to_num(abp.reshape(batch, 1, -1)).type(torch.FloatTensor).to(device)
            ecg = torch.nan_to_num(ecg.reshape(batch, 1, -1)).type(torch.FloatTensor).to(device)
            eeg = torch.nan_to_num(eeg.reshape(batch, 1, -1)).type(torch.FloatTensor).to(device)
            label = label.type(torch.float).reshape(batch, 1).to(device)
   
            pred = model(abp, eeg, ecg)
            loss = loss_func(pred, label)
            total_loss += loss.item()

            all_predictions.append(pred.detach().cpu().numpy())
            all_labels.append(label.detach().cpu().numpy())

    # Flatten the lists
    all_predictions = np.concatenate(all_predictions).flatten()
    all_labels = np.concatenate(all_labels).flatten()

    # Calculate AUROC and AUPRC
    # y_true, y_pred
    auroc = roc_auc_score(all_labels, all_predictions)
    precision, recall, _ = precision_recall_curve(all_labels, all_predictions)
    auprc = auc(recall, precision)

    # Determine the optimal threshold, which is argmin(abs(sensitivity - specificity)) per the paper
    thresholds = np.linspace(0, 1, 101) # 0 to 1 in 0.01 steps
    min_diff = float('inf')
    optimal_sensitivity = None
    optimal_specificity = None
    optimal_threshold = None

    for threshold in thresholds:
        all_predictions_binary = (all_predictions > threshold).astype(int)

        tn, fp, fn, tp = confusion_matrix(all_labels, all_predictions_binary).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        diff = abs(sensitivity - specificity)

        if diff < min_diff:
            min_diff = diff
            optimal_threshold = threshold
            optimal_sensitivity = sensitivity
            optimal_specificity = specificity

    avg_loss = total_loss / len(dataloader)
    
    # accuracy
    predictions_binary = (all_predictions > optimal_threshold).astype(int)
    accuracy = np.mean(predictions_binary == all_labels)

    if print_detailed:
        print(f"Predictions: {all_predictions}")
        print(f"Labels: {all_labels}")
    print(f"Loss: {avg_loss}")
    print(f"AUROC: {auroc}")
    print(f"AUPRC: {auprc}")
    print(f"Sensitivity: {optimal_sensitivity}")
    print(f"Specificity: {optimal_specificity}")
    print(f"Threshold: {optimal_threshold}")
    print(f"Accuracy:  {accuracy}")

    return all_predictions, all_labels, avg_loss, auroc, auprc, \
        optimal_sensitivity, optimal_specificity, optimal_threshold, accuracy

Define a function to calculate and print the AUROC and AUPRC values for each epoch of a training run:

In [97]:
def print_all_evals(model, models, device, val_loader, test_loader, loss_func, print_detailed: bool = False):
    print()
    print(f'Generate AUROC/AUPRC for Each Intermediate Model')
    print()
    val_aurocs = []
    val_auprcs = []
    val_accs   = []

    test_aurocs = []
    test_auprcs = []
    test_accs   = []

    for mod in models:
        model.load_state_dict(torch.load(mod))
        #model.train(False)
        model.eval()
        print(f'Intermediate Model:')
        print(f'  {mod}')
    
        # validation loop
        print("AUROC/AUPRC on Validation Data")
        all_predictions, all_labels, avg_loss, valid_auroc, valid_auprc, \
        optimal_sensitivity, optimal_specificity, optimal_threshold, valid_accuracy = \
            eval_model(model, device, val_loader, loss_func, print_detailed)

        val_aurocs.append(valid_auroc)
        val_auprcs.append(valid_auprc)
        val_accs.append(valid_accuracy)
        print()
    
        # test loop
        print("AUROC/AUPRC on Test Data")
        all_predictions, all_labels, avg_loss, test_auroc, test_auprc, \
        optimal_sensitivity, optimal_specificity, optimal_threshold, test_accuracy = \
            eval_model(model, device, test_loader, loss_func, print_detailed)

        test_aurocs.append(test_auroc)
        test_auprcs.append(test_auprc)
        test_accs.append(test_accuracy)
        print()
    
    return val_aurocs, val_auprcs, val_accs, test_aurocs, test_auprcs, test_accs

Define a function to plot the AUROC, AUPRC and accuracy at each epoch and print the parameters for the best epoch on validation loss, AUROC and accuracy:

In [98]:
def plot_auroc_auprc(val_losses, val_aurocs, val_auprcs, val_accs, 
                                      test_aurocs, test_auprcs, test_accs, all_models, best_epoch, experimentName):
    print()
    print(f'Plot AUROC/AUPRC for Each Intermediate Model')
    
    # Create x-axis values for epochs
    epochs = range(0, len(val_aurocs))

    # Find model with highest AUROC
    np_test_aurocs = np.array(test_aurocs)
    test_auroc_idx = np.argmax(np_test_aurocs)
    test_accs_idx  = np.argmax(test_accs)

    print(f'  Epoch with best Validation Loss:     {best_epoch:3}, {val_losses[best_epoch]:.4}')
    print(f'  Epoch with best model Test AUROC:    {test_auroc_idx:3}, {np_test_aurocs[test_auroc_idx]:.4}')
    print(f'  Epoch with best model Test Accuracy: {test_accs_idx:3}, {test_accs[test_accs_idx]:.4}')
    print()

    plt.figure(figsize=(16, 9))

    # Plots
    plt.plot(epochs, val_aurocs, 'C0', label='AUROC - Validation')
    plt.plot(epochs, test_aurocs, 'C1', label='AUROC - Test')

    plt.plot(epochs, val_auprcs, 'C2', label='AUPRC - Validation')
    plt.plot(epochs, test_auprcs, 'C3', label='AUPRC - Test')
    
    plt.plot(epochs, val_accs, 'C4', label='Accuracy - Validation')
    plt.plot(epochs, test_accs, 'C5', label='Accuracy - Test')

    # Add vertical bars
    plt.axvline(x=best_epoch, color='g', linestyle='--', label='Best Epoch - Validation Loss')
    plt.axvline(x=test_auroc_idx, color='maroon', linestyle='--', label='Best Epoch - Test AUROC')
    plt.axvline(x=test_accs_idx, color='violet', linestyle='--', label='Best Epoch - Test Accuracy')

    # Shade everything to the right of the best_model a light red
    plt.axvspan(test_auroc_idx, max(epochs), facecolor='r', alpha=0.1)

    # Add labels and title
    plt.xlabel('Epochs')
    plt.ylabel('AUROC / AUPRC')
    plt.title('Validation and Test AUROC and AUPRC by Model Iteration Across Training')

    # Add legend
    plt.legend(loc='right')

    # Save plot to disk
    plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_all_stats.png'))
    
    # Show the plot
    plt.show()

    return np_test_aurocs, test_auroc_idx

Define a function to make predictions on a given case:

In [99]:
# applies the model to a given real case to generate predictions
def predictionsForModel(case_id_to_check, my_model, my_model_state, device, ready_model=None):
    (abp, ecg, eeg, event) = get_track_data(case_id_to_check)
    
    opstart = cases.loc[case_id_to_check]['opstart'].item()
    opend = cases.loc[case_id_to_check]['opend'].item()

    abp = abp[opstart*500:opend*500]
    ecg = ecg[opstart*500:opend*500]
    eeg = eeg[opstart*128:opend*128]
    
    # number of one minute segments in each track
    splits_abp = abp.shape[0] // (60 * 500)
    splits_ecg = ecg.shape[0] // (60 * 500)
    splits_eeg = eeg.shape[0] // (60 * 128)
    
    # predict as long as each track has data in the prediction window
    splits = np.min([splits_abp, splits_ecg, splits_eeg])
    
    preds = []
    
    the_model = None
    
    if ready_model is None:
        my_model.load_state_dict(torch.load(my_model_state))
        my_model.eval()
        my_model = my_model.to(device)
        the_model = my_model
    else:
        ready_model.eval()
        ready_model = ready_model.to(device)
        the_model = ready_model
    
    for i in range(splits):
        t_abp = abp[i*60*500:(i + 1)*60*500]
        t_ecg = ecg[i*60*500:(i + 1)*60*500]
        t_eeg = eeg[i*60*128:(i + 1)*60*128]
    
        if len(t_abp) < 30000:
            t_abp = np.resize(t_abp, (30000))
            
        if len(t_ecg) < 30000:
            t_ecg = np.resize(t_ecg, (30000))
            
        if len(t_eeg) < 7680:
            t_eeg = np.resize(t_eeg, (7680))
            
        t_abp = torch.from_numpy(t_abp)
        t_ecg = torch.from_numpy(t_ecg)
        t_eeg = torch.from_numpy(t_eeg)
        
        t_abp = torch.nan_to_num(t_abp.reshape(1, 1, -1)).type(torch.FloatTensor).to(device)
        t_ecg = torch.nan_to_num(t_ecg.reshape(1, 1, -1)).type(torch.FloatTensor).to(device)
        t_eeg = torch.nan_to_num(t_eeg.reshape(1, 1, -1)).type(torch.FloatTensor).to(device)

        pred = the_model(t_abp, t_eeg, t_ecg)
        preds.append(pred.detach().cpu().numpy())
    
    return np.concatenate(preds).flatten()
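The windowing above relies on the dataset's sampling rates: ABP and ECG are sampled at 500 Hz and EEG at 128 Hz, so one minute is 30000 samples for ABP/ECG and 7680 samples for EEG. A self-contained sketch of that segmentation logic, using synthetic data (`minute_segments` is a hypothetical helper, not part of this notebook):

```python
import numpy as np

ABP_HZ, EEG_HZ = 500, 128  # sampling rates for ABP/ECG and EEG tracks

def minute_segments(track: np.ndarray, hz: int):
    """Split a 1-D signal into complete one-minute segments, dropping any trailing partial minute."""
    samples_per_min = 60 * hz
    n = len(track) // samples_per_min
    return [track[i * samples_per_min:(i + 1) * samples_per_min] for i in range(n)]

# 3.5 minutes of fake signal per track
abp = np.zeros(int(3.5 * 60 * ABP_HZ))
eeg = np.zeros(int(3.5 * 60 * EEG_HZ))

# as in predictionsForModel: predict only while every track still has a full minute of data
splits = min(len(minute_segments(abp, ABP_HZ)), len(minute_segments(eeg, EEG_HZ)))
```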

Define a function to plot the mean ABP and predictions for a case:

In [100]:
def printModelPrediction(case_id_to_check, preds, experimentName):  
    (abp, ecg, eeg, event) = get_track_data(case_id_to_check)
    
    opstart = cases.loc[case_id_to_check]['opstart'].item()
    opend = cases.loc[case_id_to_check]['opend'].item()
    minutes = (opend - opstart) / 60
    
    plt.figure(figsize=(24, 8))
    plt.margins(0)
    plt.title(f'ABP - Mean Arterial Pressure - Case: {case_id_to_check} - Operating Time: {minutes:.1f} minutes')
    plt.axhline(y = 65, color = 'maroon', linestyle = '--')
    
    opstart = opstart * 500
    opend = opend * 500
    
    minute_step = 5
    
    abp_mov_avg = moving_average(abp[opstart:(opend + 60*500)])
    myx = np.arange(opstart, opstart + len(abp_mov_avg), 1)
    plt.plot(myx, abp_mov_avg, 'purple')
    x_ticks = np.arange(opstart, opend, step=minute_step*30000)
    x_labels = [str(i*minute_step) for i in range(len(x_ticks))]
    plt.xticks(x_ticks, labels=x_labels)
    if experimentName is not None:
        plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_{case_id_to_check:04d}_surgery_map.png'))
    plt.show()
    
    plt.figure(figsize=(24, 8))
    plt.margins(0)
    plt.title(f'Model Predictions for One Minute Intervals Using {PREDICTION_WINDOW} Minute Prediction Window')
    plt.plot(preds)
    x_ticks = np.arange(0, len(preds), step=minute_step)
    x_labels = [str(i*minute_step) for i in range(len(x_ticks))]
    plt.xticks(x_ticks, labels=x_labels)
    if experimentName is not None:
        plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_{case_id_to_check:04d}_surgery_predictions.png'))
    plt.show()
    
    return preds
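`printModelPrediction` calls `moving_average`, which is defined earlier in the notebook. As a rough stand-in, assuming a plain unweighted sliding mean over one minute of 500 Hz samples, it could be sketched as:

```python
import numpy as np

def moving_average(x: np.ndarray, window: int = 60 * 500) -> np.ndarray:
    """Unweighted sliding mean over `window` samples (default: one minute at 500 Hz)."""
    kernel = np.ones(window) / window
    # 'valid' mode yields len(x) - window + 1 fully-covered positions
    return np.convolve(x, kernel, mode='valid')
```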

Define a function to run an experiment, which includes training a model and evaluating it.

In [101]:
def run_experiment(
    experimentNamePrefix: str = None,
    useAbp: bool = True, 
    useEeg: bool = False, 
    useEcg: bool = False, 
    nResiduals: int = 12, 
    skip_connection: bool = False, 
    batch_size: int = 64, 
    learning_rate: float = 1e-4, 
    weight_decay: float = 0.0, 
    pos_weight: float = None,
    max_epochs: int = 100, 
    patience: int = 25, 
    device: str = "cpu"
):
    reset_random_state()

    time_start = timer()

    experimentName = ""

    experimentOptions = [experimentNamePrefix, 'ABP', 'EEG', 'ECG', 'SKIPCONNECTION']
    experimentValues = [experimentNamePrefix is not None, useAbp, useEeg, useEcg, skip_connection]
    experimentFlags = [name for name, value in zip(experimentOptions, experimentValues) if value]
    if experimentFlags:
        experimentName = "_".join(experimentFlags)

    experimentName = f"{experimentName}_{nResiduals}_RESIDUAL_BLOCKS_{batch_size}_BATCH_SIZE_{learning_rate:.0e}_LEARNING_RATE"

    if weight_decay is not None and weight_decay != 0.0:
        experimentName = f"{experimentName}_{weight_decay:.0e}_WEIGHT_DECAY"

    predictionWindow = 'ALL' if PREDICTION_WINDOW == 'ALL' else f'{PREDICTION_WINDOW:03}'
    experimentName = f"{experimentName}_{predictionWindow}_MINS"

    maxCases = '_ALL' if MAX_CASES is None else f'{MAX_CASES:04}'
    experimentName = f"{experimentName}_{maxCases}_MAX_CASES"
    
    # Add unique 8-character uuid suffix to experiment name
    experimentName = f"{experimentName}_{uuid.uuid4().hex[:8]}"

    # Fork stdout to file and console
    with ForkedStdout(os.path.join(VITAL_RUNS, f'{experimentName}.log')):
        print(f"Experiment Setup")
        print(f'  name:              {experimentName}')
        print(f'  prediction_window: {predictionWindow}')
        print(f'  max_cases:         {maxCases}')
        print(f'  use_abp:           {useAbp}')
        print(f'  use_eeg:           {useEeg}')
        print(f'  use_ecg:           {useEcg}')
        print(f'  n_residuals:       {nResiduals}')
        print(f'  skip_connection:   {skip_connection}')
        print(f'  batch_size:        {batch_size}')
        print(f'  learning_rate:     {learning_rate}')
        print(f'  weight_decay:      {weight_decay}')
        if pos_weight is not None:
            print(f'  pos_weight:        {pos_weight}')
        print(f'  max_epochs:        {max_epochs}')
        print(f'  patience:          {patience}')
        print(f'  device:            {device}')
        print()

        train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
        val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=batch_size, shuffle=True)
        test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

        # Disable final sigmoid activation for BCEWithLogitsLoss
        model = HypotensionCNN(useAbp, useEeg, useEcg, device, nResiduals, skip_connection, useSigmoid=(pos_weight is None))
        model = model.to(device)
    
        if pos_weight is not None:
            # Apply weights to positive class
            loss_func = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([pos_weight]).to(device))
        else:
            loss_func = nn.BCELoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=weight_decay)

    
        print(f'Model Architecture')
        print(model)
        print()

        print(f'Training Loop')
        # Training loop
        best_epoch = 0
        train_losses = []
        val_losses = []
        best_loss = float('inf')
        no_improve_epochs = 0
        model_path = os.path.join(VITAL_MODELS, f"{experimentName}.model")

        all_models = []

        for i in range(max_epochs):
            # Train the model and get the training loss
            train_loss = train_model_one_iter(model, device, loss_func, optimizer, train_loader)
            train_losses.append(train_loss)
            # Calculate validation loss
            val_loss = evaluate_model(model, loss_func, val_loader)
            val_losses.append(val_loss)
            print(f"[{datetime.now()}] Completed epoch {i} with training loss {train_loss:.8f}, validation loss {val_loss:.8f}")

            # Save all intermediary models.
            tmp_model_path = os.path.join(VITAL_MODELS, f"{experimentName}_{i:04d}.model")
            torch.save(model.state_dict(), tmp_model_path)
            all_models.append(tmp_model_path)
  
            # Check if validation loss has improved
            if val_loss < best_loss:
                best_epoch = i
                best_loss = val_loss
                no_improve_epochs = 0
                torch.save(model.state_dict(), model_path)
                print(f"Validation loss improved to {val_loss:.8f}. Model saved.")
            else:
                no_improve_epochs += 1
                print(f"No improvement in validation loss. {no_improve_epochs} epochs without improvement.")

            # exit early if no improvement in loss over last 'patience' epochs
            if no_improve_epochs >= patience:
                print("Early stopping due to no improvement in validation loss.")
                break

        # Load best model from disk
        #print()
        #if os.path.exists(model_path):
        #    model.load_state_dict(torch.load(model_path))
        #    print(f"Loaded best model from disk from epoch {best_epoch}.")
        #else:
        #    print(f"No saved model found for {experimentName}.")

        #model.train(False)

        # Plot the training and validation losses across all training epochs.
        plot_losses(train_losses, val_losses, best_epoch, experimentName)

        # Generate AUROC/AUPRC for each intermediate model generated across training epochs.
        val_aurocs, val_auprcs, val_accs, test_aurocs, test_auprcs, test_accs = \
            print_all_evals(model, all_models, device, val_loader, test_loader, loss_func, print_detailed=False)

        # Find model with highest AUROC. Plot AUROC/AUPRC across all epochs.
        np_test_aurocs, test_auroc_idx = plot_auroc_auprc(val_losses, val_aurocs, val_auprcs, val_accs, \
                                        test_aurocs, test_auprcs, test_accs, all_models, best_epoch, experimentName)

        ## AUROC / AUPRC - Model with Best Validation Loss
        best_model_val_loss = all_models[best_epoch]
    
        print(f'AUROC/AUPRC Plots - Best Model Based on Validation Loss')
        print(f'  Epoch with best Validation Loss:  {best_epoch:3}, {val_losses[best_epoch]:.4}')
        print(f'  Best Model Based on Validation Loss:')
        print(f'    {best_model_val_loss}')
        print()
        print(f'Generate Stats Based on Test Data')
        model.load_state_dict(torch.load(best_model_val_loss))
        #model.train(False)
        model.eval()
    
        best_model_val_test_predictions, best_model_val_test_labels, test_loss, \
            best_model_val_test_auroc, best_model_val_test_auprc, test_sensitivity, test_specificity, \
            best_model_val_test_threshold, best_model_val_accuracy = \
                eval_model(model, device, test_loader, loss_func, print_detailed=False)

        # y_test, y_pred
        display = RocCurveDisplay.from_predictions(
            best_model_val_test_labels,
            best_model_val_test_predictions,
            plot_chance_level=True
        )
        # Save plot to disk and show
        plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_val_auroc.png'))
        plt.show()

        print(f'best_model_val_test_auroc: {best_model_val_test_auroc}')

        # Save best model in its entirety
        torch.save(model, os.path.join(VITAL_MODELS, f'{experimentName}_full.model'))

        best_model_val_test_predictions_binary = \
        (best_model_val_test_predictions > best_model_val_test_threshold).astype(int)

        # y_test, y_pred
        display = PrecisionRecallDisplay.from_predictions(
            best_model_val_test_labels, 
            best_model_val_test_predictions_binary,
            plot_chance_level=True
        )
        # Save plot to disk and show
        plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_val_auprc.png'))
        plt.show()

        print(f'best_model_val_test_auprc: {best_model_val_test_auprc}')
        print()

        ## AUROC / AUPRC - Model with Best AUROC
        # Find model with highest AUROC
        best_model_auroc = all_models[test_auroc_idx]

        print(f'AUROC/AUPRC Plots - Best Model Based on Model AUROC')
        print(f'  Epoch with best model Test AUROC: {test_auroc_idx:3}, {np_test_aurocs[test_auroc_idx]:.4}')
        print(f'  Best Model Based on Model AUROC:')
        print(f'    {best_model_auroc}')
        print()
        print(f'Generate Stats Based on Test Data')
        model.load_state_dict(torch.load(best_model_auroc))
        #model.train(False)
        model.eval()
    
        best_model_auroc_test_predictions, best_model_auroc_test_labels, test_loss, \
            best_model_auroc_test_auroc, best_model_auroc_test_auprc, test_sensitivity, test_specificity, \
            best_model_auroc_test_threshold, best_model_auroc_accuracy = \
                eval_model(model, device, test_loader, loss_func, print_detailed=False)

        # y_test, y_pred
        display = RocCurveDisplay.from_predictions(
            best_model_auroc_test_labels,
            best_model_auroc_test_predictions,
            plot_chance_level=True
        )
        # Save plot to disk and show
        plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_auroc_auroc.png'))
        plt.show()

        print(f'best_model_auroc_test_auroc: {best_model_auroc_test_auroc}')

        best_model_auroc_test_predictions_binary = \
            (best_model_auroc_test_predictions > best_model_auroc_test_threshold).astype(int)

        # y_test, y_pred
        display = PrecisionRecallDisplay.from_predictions(
            best_model_auroc_test_labels, 
            best_model_auroc_test_predictions_binary,
            plot_chance_level=True
        )
        # Save plot to disk and show
        plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_auroc_auprc.png'))
        plt.show()

        print(f"best_model_auroc_test_auprc: {best_model_auroc_test_auprc}")
        print()
        
        time_delta = np.round(timer() - time_start, 3)
        print(f'Total Processing Time: {time_delta:.4f} sec')
        
    return (model, best_model_val_loss, best_model_auroc, experimentName)
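The patience-based early stopping inside `run_experiment` can be isolated into a small, testable sketch (a simplified re-statement of the loop above, not part of the notebook's API):

```python
def early_stop_epoch(val_losses, patience):
    """Return (epoch training stops after, best epoch seen), mirroring the training loop."""
    best_loss, best_epoch, no_improve = float('inf'), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            # validation loss improved: record epoch and reset the patience counter
            best_loss, best_epoch, no_improve = loss, epoch, 0
        else:
            no_improve += 1
        if no_improve >= patience:
            # 'patience' consecutive epochs without improvement: stop early
            return epoch, best_epoch
    return len(val_losses) - 1, best_epoch
```

Fed the (rounded) validation losses from the baseline ABP run logged below with `patience=5`, this reproduces the observed behavior: training stops after epoch 13 with the best model from epoch 8.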

Experiments¶

In [102]:
# When false, run only the first experiment below and then stop
SWEEP_ALL = True

Data tracks¶

Run experiments across the biosignal data track combinations:

  • ABP
  • ECG
  • EEG
  • ABP+ECG
  • ABP+EEG
  • ECG+EEG
  • ABP+ECG+EEG

The first experiment acts as a baseline.
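The seven combinations above are exactly the non-empty subsets of the three signals. As a sanity check, they could be generated rather than enumerated by hand (a sketch; the notebook builds `data_tracks` explicitly below):

```python
from itertools import product

# Every (useAbp, useEeg, useEcg) triple with at least one signal enabled
combos = [flags for flags in product([False, True], repeat=3) if any(flags)]
```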

In [103]:
ENABLE_EXPERIMENT = True
DISPLAY_MODEL_PREDICTION = True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY = True

#MAX_EPOCHS=200
#PATIENCE=20

MAX_EPOCHS=80
PATIENCE=5

data_tracks = [
    # useAbp, useEeg, useEcg, experiment enabled
    [True, False, False, True], # ABP only
    [False, False, True, SWEEP_ALL], # ECG only
    [False, True, False, SWEEP_ALL], # EEG only
    [True, False, True, SWEEP_ALL], # ABP + ECG
    [True, True, False, SWEEP_ALL], # ABP + EEG
    [False, True, True, SWEEP_ALL], # ECG + EEG
    [True, True, True, SWEEP_ALL] # ABP + ECG + EEG
]

if ENABLE_EXPERIMENT:
    for (useAbp, useEeg, useEcg, enable) in data_tracks:
        if enable:
            (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
                experimentNamePrefix=None, 
                useAbp=useAbp, 
                useEeg=useEeg, 
                useEcg=useEcg,
                nResiduals=12, 
                skip_connection=False,
                batch_size=128,
                learning_rate=1e-4,
                weight_decay=1e-1,
                pos_weight=None,
                max_epochs=MAX_EPOCHS,
                patience=PATIENCE,
                device=device
            )

            if DISPLAY_MODEL_PREDICTION:
                for case_id_to_check in my_cases_of_interest_idx:
                    preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                    printModelPrediction(case_id_to_check, preds, experimentName)

                    if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                        break
Experiment Setup
  name:              ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5
  prediction_window: 003
  max_cases:         _ALL
  use_abp:           True
  use_eeg:           False
  use_ecg:           False
  n_residuals:       12
  skip_connection:   False
  batch_size:        128
  learning_rate:     0.0001
  weight_decay:      0.1
  max_epochs:        80
  patience:          5
  device:            mps

Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (abpFc): Linear(in_features=2814, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=32, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)

Training Loop
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:50<00:00,  1.81it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.39it/s]
[2024-05-06 22:51:09.816098] Completed epoch 0 with training loss 0.50833446, validation loss 0.59527922
Validation loss improved to 0.59527922. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.97it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-06 22:52:02.891692] Completed epoch 1 with training loss 0.44851315, validation loss 0.62957680
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.97it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-06 22:52:55.920391] Completed epoch 2 with training loss 0.43924722, validation loss 0.65781873
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.96it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
[2024-05-06 22:53:49.160947] Completed epoch 3 with training loss 0.44168365, validation loss 0.55738705
Validation loss improved to 0.55738705. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.98it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-06 22:54:42.084810] Completed epoch 4 with training loss 0.43743092, validation loss 0.58563066
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.03it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-06 22:55:33.784200] Completed epoch 5 with training loss 0.43733558, validation loss 0.55916941
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.01it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
[2024-05-06 22:56:25.595688] Completed epoch 6 with training loss 0.43729022, validation loss 0.56144595
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.06it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
[2024-05-06 22:57:16.371146] Completed epoch 7 with training loss 0.43614823, validation loss 0.55167925
Validation loss improved to 0.55167925. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.07it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.66it/s]
[2024-05-06 22:58:06.827162] Completed epoch 8 with training loss 0.43605462, validation loss 0.54808503
Validation loss improved to 0.54808503. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:43<00:00,  2.09it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.66it/s]
[2024-05-06 22:58:56.878773] Completed epoch 9 with training loss 0.43672553, validation loss 0.55009806
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:43<00:00,  2.09it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.65it/s]
[2024-05-06 22:59:46.874922] Completed epoch 10 with training loss 0.43778062, validation loss 0.55915201
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.03it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-06 23:00:38.489569] Completed epoch 11 with training loss 0.43698364, validation loss 0.55134344
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.04it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-06 23:01:29.644503] Completed epoch 12 with training loss 0.43375269, validation loss 0.57082260
No improvement in validation loss. 4 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.04it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
[2024-05-06 23:02:20.959691] Completed epoch 13 with training loss 0.43675187, validation loss 0.58579284
No improvement in validation loss. 5 epochs without improvement.
Early stopping due to no improvement in validation loss.

Plot Validation and Loss Values from Training
  Epoch with best Validation Loss:    8, 0.5481
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0000.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  3.00it/s]
Loss: 0.5999449118971825
AUROC: 0.8386067595393074
AUPRC: 0.7068158264162598
Sensitivity: 0.7789661319073083
Specificity: 0.7496473906911142
Threshold: 0.13
Accuracy:  0.7579585649317837

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.49it/s]
Loss: 0.5523457881477144
AUROC: 0.8340966330183958
AUPRC: 0.6646534488215912
Sensitivity: 0.7444519166106254
Specificity: 0.7875029811590747
Threshold: 0.14
Accuracy:  0.7762323943661972

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0001.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.58it/s]
Loss: 0.6259194742888212
AUROC: 0.8425853993347563
AUPRC: 0.7232572497112336
Sensitivity: 0.7825311942959001
Specificity: 0.7461212976022567
Threshold: 0.1
Accuracy:  0.7564426478019202

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.58it/s]
Loss: 0.5783009578784307
AUROC: 0.8389218204164208
AUPRC: 0.6770395358717672
Sensitivity: 0.7491593813046402
Specificity: 0.7894109229668496
Threshold: 0.11
Accuracy:  0.7788732394366197

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0002.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.6538061611354351
AUROC: 0.8436388275017662
AUPRC: 0.728167249936243
Sensitivity: 0.750445632798574
Specificity: 0.7799717912552891
Threshold: 0.09
Accuracy:  0.7716018191005558

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.55it/s]
Loss: 0.6023662010828654
AUROC: 0.8391159666469448
AUPRC: 0.6803042093287099
Sensitivity: 0.7700067249495629
Specificity: 0.7662771285475793
Threshold: 0.09
Accuracy:  0.7672535211267606

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0003.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.57it/s]
Loss: 0.5600808002054691
AUROC: 0.8435860303859972
AUPRC: 0.7260232247082784
Sensitivity: 0.7540106951871658
Specificity: 0.770098730606488
Threshold: 0.15
Accuracy:  0.7655381505811015

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.58it/s]
Loss: 0.5224893036815855
AUROC: 0.8403064254623624
AUPRC: 0.6809580045502535
Sensitivity: 0.773369199731002
Specificity: 0.7579298831385642
Threshold: 0.15
Accuracy:  0.7619718309859155

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0004.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.583466436713934
AUROC: 0.8434150682968404
AUPRC: 0.7268194437682877
Sensitivity: 0.7700534759358288
Specificity: 0.7517630465444288
Threshold: 0.12
Accuracy:  0.7569479535118747

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.57it/s]
Loss: 0.5407687127590179
AUROC: 0.8396988063014044
AUPRC: 0.6810927793932988
Sensitivity: 0.7545393409549428
Specificity: 0.7805866921058908
Threshold: 0.13
Accuracy:  0.7737676056338029

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0005.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.5498393271118402
AUROC: 0.8436853392466104
AUPRC: 0.7267142674005587
Sensitivity: 0.7468805704099821
Specificity: 0.7842031029619182
Threshold: 0.16
Accuracy:  0.7736230419403739

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.57it/s]
Loss: 0.5134238024552663
AUROC: 0.8398345723353893
AUPRC: 0.680800214656267
Sensitivity: 0.7679892400806994
Specificity: 0.7643691867398045
Threshold: 0.16
Accuracy:  0.7653169014084507

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0006.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.552303571254015
AUROC: 0.8431359978277758
AUPRC: 0.7265926798868276
Sensitivity: 0.750445632798574
Specificity: 0.7778561354019746
Threshold: 0.15
Accuracy:  0.7700859019706923

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.56it/s]
Loss: 0.5175012396441565
AUROC: 0.8391094710481538
AUPRC: 0.6803837071866913
Sensitivity: 0.7726967047747142
Specificity: 0.7591223467684236
Threshold: 0.15
Accuracy:  0.7626760563380282

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0007.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.58it/s]
Loss: 0.5531160943210125
AUROC: 0.8429977196674283
AUPRC: 0.7262186374431997
Sensitivity: 0.7736185383244206
Specificity: 0.7475317348377997
Threshold: 0.15
Accuracy:  0.7549267306720566

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.55it/s]
Loss: 0.5126960270934635
AUROC: 0.8385335279553732
AUPRC: 0.679845595135447
Sensitivity: 0.7652992602555481
Specificity: 0.7703315048891008
Threshold: 0.16
Accuracy:  0.7690140845070422

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0008.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.5573414973914623
AUROC: 0.842801616094572
AUPRC: 0.726491696023061
Sensitivity: 0.7557932263814616
Specificity: 0.7764456981664316
Threshold: 0.15
Accuracy:  0.7705912076806468

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.56it/s]
Loss: 0.5141099204619726
AUROC: 0.8381691328824693
AUPRC: 0.6793476440307518
Sensitivity: 0.7740416946872899
Specificity: 0.7553064631528739
Threshold: 0.15
Accuracy:  0.7602112676056338

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0009.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.5422646980732679
AUROC: 0.8426055125217159
AUPRC: 0.7261121318547873
Sensitivity: 0.7664884135472371
Specificity: 0.7595204513399154
Threshold: 0.15
Accuracy:  0.7614957049014653

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.58it/s]
Loss: 0.5090439829561445
AUROC: 0.8375930294045332
AUPRC: 0.6789286943004632
Sensitivity: 0.753866845998655
Specificity: 0.777009301216313
Threshold: 0.16
Accuracy:  0.7709507042253522

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0010.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.58it/s]
Loss: 0.5662086848169565
AUROC: 0.8418839519395397
AUPRC: 0.7264239055951514
Sensitivity: 0.768270944741533
Specificity: 0.7566995768688294
Threshold: 0.14
Accuracy:  0.7599797877716018

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.58it/s]
Loss: 0.5213217748536004
AUROC: 0.8365319212168871
AUPRC: 0.6784085647161456
Sensitivity: 0.753866845998655
Specificity: 0.7755783448604817
Threshold: 0.15
Accuracy:  0.7698943661971831

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0011.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.5439577214419842
AUROC: 0.8416966478859784
AUPRC: 0.7254137734279127
Sensitivity: 0.768270944741533
Specificity: 0.7574047954866009
Threshold: 0.15
Accuracy:  0.7604850934815564

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.43it/s]
Loss: 0.5116990917258792
AUROC: 0.8362438694779191
AUPRC: 0.6780539572771977
Sensitivity: 0.7552118359112306
Specificity: 0.7736704030527068
Threshold: 0.16
Accuracy:  0.768838028169014

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0012.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
Loss: 0.5735801178961992
AUROC: 0.8410844527578951
AUPRC: 0.7254396923685087
Sensitivity: 0.768270944741533
Specificity: 0.7588152327221439
Threshold: 0.13
Accuracy:  0.7614957049014653

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.52it/s]
Loss: 0.5283325072791841
AUROC: 0.8353589443834002
AUPRC: 0.6775398639354736
Sensitivity: 0.7511768661735037
Specificity: 0.7743858812306225
Threshold: 0.14
Accuracy:  0.7683098591549296

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0013.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.57it/s]
Loss: 0.5887816082686186
AUROC: 0.8403377506920194
AUPRC: 0.725586337821946
Sensitivity: 0.7450980392156863
Specificity: 0.7884344146685472
Threshold: 0.12
Accuracy:  0.7761495704901465

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.56it/s]
Loss: 0.545028559366862
AUROC: 0.8345623113168888
AUPRC: 0.6765614283180954
Sensitivity: 0.7659717552118359
Specificity: 0.755067970426902
Threshold: 0.12
Accuracy:  0.7579225352112676


Plot AUROC/AUPRC for Each Intermediate Model
  Epoch with best Validation Loss:       8, 0.5481
  Epoch with best model Test AUROC:      3, 0.8403
  Epoch with best model Test Accuracy:   1, 0.7789

AUROC/AUPRC Plots - Best Model Based on Validation Loss
  Epoch with best Validation Loss:    8, 0.5481
  Best Model Based on Validation Loss:
    ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0008.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.55it/s]
Loss: 0.5141099204619726
AUROC: 0.8381691328824693
AUPRC: 0.6793476440307518
Sensitivity: 0.7740416946872899
Specificity: 0.7553064631528739
Threshold: 0.15
Accuracy:  0.7602112676056338
best_model_val_test_auroc: 0.8381691328824693
best_model_val_test_auprc: 0.6793476440307518

AUROC/AUPRC Plots - Best Model Based on Model AUROC
  Epoch with best model Test AUROC:   3, 0.8403
  Best Model Based on Model AUROC:
    ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_c4b802d5_0003.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.54it/s]
Loss: 0.5224893036815855
AUROC: 0.8403064254623624
AUPRC: 0.6809580045502535
Sensitivity: 0.773369199731002
Specificity: 0.7579298831385642
Threshold: 0.15
Accuracy:  0.7619718309859155
best_model_auroc_test_auroc: 0.8403064254623624
best_model_auroc_test_auprc: 0.6809580045502535
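Each evaluation block above reports sensitivity, specificity, and accuracy at a single probability threshold alongside AUROC/AUPRC. A minimal sketch of how such a per-threshold summary can be computed is below; the labels and probabilities are toy values for illustration, and the project's actual evaluation code lives in the repository:

```python
def confusion_stats(y_true, y_prob, threshold):
    """Sensitivity, specificity, and accuracy of binary predictions at a cutoff."""
    tp = fp = tn = fn = 0
    for truth, prob in zip(y_true, y_prob):
        pred = 1 if prob >= threshold else 0
        if truth == 1:
            tp += pred
            fn += 1 - pred
        else:
            fp += pred
            tn += 1 - pred
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Toy example: four positives, four negatives, cutoff 0.15
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_prob = [0.9, 0.4, 0.2, 0.1, 0.3, 0.2, 0.1, 0.05]
sens, spec, acc = confusion_stats(y_true, y_prob, threshold=0.15)
# sens == 0.75 (3/4), spec == 0.5 (2/4), acc == 0.625 (5/8)
```

Sweeping `threshold` over a grid and keeping the cutoff that best balances sensitivity and specificity is one way a reported threshold such as 0.15 could be selected; the exact selection rule used here is in the project code.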

Total Processing Time: 1112.8460 sec
Experiment Setup
  name:              ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669
  prediction_window: 003
  max_cases:         _ALL
  use_abp:           False
  use_eeg:           False
  use_ecg:           True
  n_residuals:       12
  skip_connection:   False
  batch_size:        128
  learning_rate:     0.0001
  weight_decay:      0.1
  max_epochs:        80
  patience:          5
  device:            mps

Model Architecture
HypotensionCNN(
  (ecgResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (ecgFc): Linear(in_features=2814, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=32, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
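One detail worth sanity-checking in the printed architecture is `ecgFc`'s `in_features=2814`. Six of the twelve residual blocks end in a stride-2 `MaxPool1d` (the one in block 8 uses padding 1), and the final block emits 6 channels, so 2814 = 6 channels × 469 time steps. The sketch below walks a sequence length through those pools using PyTorch's output-length formula; the 30,000-sample input length is an inference from this arithmetic (e.g. 60 s of ECG at 500 Hz), not a value stated in the log:

```python
def maxpool1d_out(n, kernel=2, stride=2, padding=0):
    # PyTorch MaxPool1d output-length formula (dilation=1, ceil_mode=False)
    return (n + 2 * padding - kernel) // stride + 1

length = 30_000  # assumed samples per ECG input segment
# Paddings of the six downsampling MaxPool1d layers, in block order
# (blocks 0, 2, 4, 6, 8, 10; block 8 has padding=1)
for pad in (0, 0, 0, 0, 1, 0):
    length = maxpool1d_out(length, padding=pad)

channels = 6  # output channels of the last ResidualBlock
print(length, channels * length)  # 469, 2814 -> matches ecgFc in_features
```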

Training Loop
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.03it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
[2024-05-06 23:11:20.321890] Completed epoch 0 with training loss 0.62600160, validation loss 0.62441230
Validation loss improved to 0.62441230. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.06it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.58it/s]
[2024-05-06 23:12:11.179101] Completed epoch 1 with training loss 0.59988081, validation loss 0.60588717
Validation loss improved to 0.60588717. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.04it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.58it/s]
[2024-05-06 23:13:02.503357] Completed epoch 2 with training loss 0.59706962, validation loss 0.60280043
Validation loss improved to 0.60280043. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.05it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.58it/s]
[2024-05-06 23:13:53.719446] Completed epoch 3 with training loss 0.59909952, validation loss 0.59833461
Validation loss improved to 0.59833461. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.04it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.57it/s]
[2024-05-06 23:14:45.112668] Completed epoch 4 with training loss 0.59590977, validation loss 0.60698223
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.04it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-06 23:15:36.496191] Completed epoch 5 with training loss 0.59581620, validation loss 0.60819048
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.04it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
[2024-05-06 23:16:27.870394] Completed epoch 6 with training loss 0.59743464, validation loss 0.60769677
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.02it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.57it/s]
[2024-05-06 23:17:19.584166] Completed epoch 7 with training loss 0.59522676, validation loss 0.60765004
No improvement in validation loss. 4 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.04it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.58it/s]
[2024-05-06 23:18:10.833842] Completed epoch 8 with training loss 0.59435087, validation loss 0.60631752
No improvement in validation loss. 5 epochs without improvement.
Early stopping due to no improvement in validation loss.
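The training log above shows early stopping with `patience: 5` (from the experiment setup): training halts after five consecutive epochs without a new best validation loss, and the model saved at the best epoch is kept. A minimal sketch of that logic, replayed against the validation losses actually logged for this ECG run:

```python
def early_stop_epoch(val_losses, patience=5):
    """Return (epoch at which training stops, epoch with best validation loss)."""
    best_loss = float("inf")
    best_epoch = -1
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:          # improvement: save model, reset counter
            best_loss, best_epoch = loss, epoch
            bad_epochs = 0
        else:
            bad_epochs += 1           # no improvement this epoch
            if bad_epochs >= patience:
                return epoch, best_epoch
    return len(val_losses) - 1, best_epoch

# Validation losses from the ECG training log above (epochs 0-8)
losses = [0.62441230, 0.60588717, 0.60280043, 0.59833461,
          0.60698223, 0.60819048, 0.60769677, 0.60765004, 0.60631752]
stop, best = early_stop_epoch(losses, patience=5)
# stop == 8 (fifth epoch without improvement), best == 3 (val loss 0.5983)
```

This reproduces the log's "Epoch with best Validation Loss: 3, 0.5983" and the stop after epoch 8.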

Plot Validation and Loss Values from Training
  Epoch with best Validation Loss:    3, 0.5983
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model:
  ./vitaldb_cache/models/ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669_0000.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.86it/s]
Loss: 0.6244106106460094
AUROC: 0.5692691873518223
AUPRC: 0.33468811109906094
Sensitivity: 0.3404634581105169
Specificity: 0.764456981664316
Threshold: 0.4
Accuracy:  0.6442647801920162

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.50it/s]
Loss: 0.614902647336324
AUROC: 0.5211669110669125
AUPRC: 0.2757204402556436
Sensitivity: 0.2972427706792199
Specificity: 0.7307417123777725
Threshold: 0.4
Accuracy:  0.6172535211267606

Intermediate Model:
  ./vitaldb_cache/models/ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669_0001.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.50it/s]
Loss: 0.6065739542245865
AUROC: 0.5539190544790811
AUPRC: 0.33399010482546154
Sensitivity: 0.9340463458110517
Specificity: 0.09097320169252468
Threshold: 0.35000000000000003
Accuracy:  0.3299646286003032

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.53it/s]
Loss: 0.5916345742013719
AUROC: 0.5271067752944631
AUPRC: 0.28133675634447675
Sensitivity: 0.9307330195023538
Specificity: 0.08800381588361555
Threshold: 0.35000000000000003
Accuracy:  0.3086267605633803

Intermediate Model:
  ./vitaldb_cache/models/ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669_0002.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
Loss: 0.6024129651486874
AUROC: 0.5563577783979344
AUPRC: 0.3293238525359121
Sensitivity: 0.20499108734402852
Specificity: 0.8624823695345557
Threshold: 0.34
Accuracy:  0.6760990399191511

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.53it/s]
Loss: 0.5867035594251421
AUROC: 0.525279026064352
AUPRC: 0.2817930976043608
Sensitivity: 0.2051109616677875
Specificity: 0.8428332935845456
Threshold: 0.34
Accuracy:  0.6758802816901408

Intermediate Model:
  ./vitaldb_cache/models/ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669_0003.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
Loss: 0.6007625684142113
AUROC: 0.5459617748881832
AUPRC: 0.3176444454835705
Sensitivity: 0.37254901960784315
Specificity: 0.6861777150916785
Threshold: 0.33
Accuracy:  0.5972713491662456

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.53it/s]
Loss: 0.5835927837424808
AUROC: 0.5240330739851911
AUPRC: 0.2796923514523272
Sensitivity: 0.351714862138534
Specificity: 0.6804197471977105
Threshold: 0.33
Accuracy:  0.5943661971830986

Intermediate Model:
  ./vitaldb_cache/models/ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669_0004.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
Loss: 0.6062760762870312
AUROC: 0.5317153531498507
AUPRC: 0.31152580545601316
Sensitivity: 0.9928698752228164
Specificity: 0.011283497884344146
Threshold: 0.35000000000000003
Accuracy:  0.28954017180394137

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.46it/s]
Loss: 0.5913719256718953
AUROC: 0.5365999726382925
AUPRC: 0.2836074140967395
Sensitivity: 0.9878950907868191
Specificity: 0.009778201764846173
Threshold: 0.35000000000000003
Accuracy:  0.2658450704225352

Intermediate Model:
  ./vitaldb_cache/models/ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669_0005.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.48it/s]
Loss: 0.6071056947112083
AUROC: 0.5173394527704658
AUPRC: 0.300693095325856
Sensitivity: 0.008912655971479501
Specificity: 0.9830747531734838
Threshold: 0.35000000000000003
Accuracy:  0.7069226882263769

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.46it/s]
Loss: 0.5900019261572096
AUROC: 0.5547674407228496
AUPRC: 0.29188086461099977
Sensitivity: 0.00605245460659045
Specificity: 0.989983305509182
Threshold: 0.35000000000000003
Accuracy:  0.7323943661971831

Intermediate Model:
  ./vitaldb_cache/models/ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669_0006.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
Loss: 0.6048626266419888
AUROC: 0.5172797417466795
AUPRC: 0.296588839363907
Sensitivity: 0.11051693404634581
Specificity: 0.9146685472496474
Threshold: 0.35000000000000003
Accuracy:  0.686710459828196

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.53it/s]
Loss: 0.5908074471685621
AUROC: 0.5275382113622938
AUPRC: 0.2801748759574002
Sensitivity: 0.08406186953597848
Specificity: 0.9356069639875984
Threshold: 0.35000000000000003
Accuracy:  0.7126760563380282

Intermediate Model:
  ./vitaldb_cache/models/ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669_0007.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.56it/s]
Loss: 0.6097158119082451
AUROC: 0.48308041000731616
AUPRC: 0.28932470167158175
Sensitivity: 0.9982174688057041
Specificity: 0.0007052186177715092
Threshold: 0.35000000000000003
Accuracy:  0.2834765032844871

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.54it/s]
Loss: 0.5926611191696591
AUROC: 0.5080399474514077
AUPRC: 0.27274157583709935
Sensitivity: 0.9979825151311366
Specificity: 0.0011924636298592892
Threshold: 0.35000000000000003
Accuracy:  0.2621478873239437

Intermediate Model:
  ./vitaldb_cache/models/ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669_0008.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.50it/s]
Loss: 0.6089581921696663
AUROC: 0.4366755164689289
AUPRC: 0.24443548930308637
Sensitivity: 0.9964349376114082
Specificity: 0.013399153737658674
Threshold: 0.35000000000000003
Accuracy:  0.292066700353714

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.53it/s]
Loss: 0.5945242431428698
AUROC: 0.49172829599914414
AUPRC: 0.25134200318773164
Sensitivity: 1.0
Specificity: 0.0023849272597185785
Threshold: 0.35000000000000003
Accuracy:  0.263556338028169


Plot AUROC/AUPRC for Each Intermediate Model
  Epoch with best Validation Loss:       3, 0.5983
  Epoch with best model Test AUROC:      5, 0.5548
  Epoch with best model Test Accuracy:   5, 0.7324

AUROC/AUPRC Plots - Best Model Based on Validation Loss
  Epoch with best Validation Loss:    3, 0.5983
  Best Model Based on Validation Loss:
    ./vitaldb_cache/models/ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669_0003.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.54it/s]
Loss: 0.5835927837424808
AUROC: 0.5240330739851911
AUPRC: 0.2796923514523272
Sensitivity: 0.351714862138534
Specificity: 0.6804197471977105
Threshold: 0.33
Accuracy:  0.5943661971830986
best_model_val_test_auroc: 0.5240330739851911
best_model_val_test_auprc: 0.2796923514523272

AUROC/AUPRC Plots - Best Model Based on Model AUROC
  Epoch with best model Test AUROC:   5, 0.5548
  Best Model Based on Model AUROC:
    ./vitaldb_cache/models/ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_74ce8669_0005.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.51it/s]
Loss: 0.5900019261572096
AUROC: 0.5547674407228496
AUPRC: 0.29188086461099977
Sensitivity: 0.00605245460659045
Specificity: 0.989983305509182
Threshold: 0.35000000000000003
Accuracy:  0.7323943661971831
best_model_auroc_test_auroc: 0.5547674407228496
best_model_auroc_test_auprc: 0.29188086461099977

Total Processing Time: 726.2280 sec
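The per-model statistics printed above (AUROC at a fixed classification threshold, plus sensitivity and specificity) can be reproduced from raw prediction scores in a few lines of plain Python. This is a minimal illustrative sketch, not the notebook's actual evaluation code; the helper names `auroc` and `sens_spec` and the toy labels/scores below are hypothetical.

```python
def auroc(labels, scores):
    """Rank-based AUROC (Mann-Whitney U statistic): the probability that a
    randomly chosen positive example scores higher than a randomly chosen
    negative one, with ties counted as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity when scores >= threshold are called positive."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Toy usage with made-up scores, thresholded at 0.35 as in the logs above:
labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.35, 0.1]
```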
Experiment Setup
  name:              EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d
  prediction_window: 003
  max_cases:         _ALL
  use_abp:           False
  use_eeg:           True
  use_ecg:           False
  n_residuals:       12
  skip_connection:   False
  batch_size:        128
  learning_rate:     0.0001
  weight_decay:      0.1
  max_epochs:        80
  patience:          5
  device:            mps
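The experiment setup printed above could be held in a small dataclass; the following is a hypothetical sketch (the field names mirror the printed keys, but `ExperimentConfig` itself is not taken from the project code).

```python
from dataclasses import dataclass

@dataclass
class ExperimentConfig:
    # Field names mirror the printed "Experiment Setup" keys; this class
    # is an illustrative stand-in, not the project's actual config object.
    name: str
    prediction_window: str
    max_cases: str
    use_abp: bool
    use_eeg: bool
    use_ecg: bool
    n_residuals: int
    skip_connection: bool
    batch_size: int
    learning_rate: float
    weight_decay: float
    max_epochs: int
    patience: int
    device: str

# Values as printed for the EEG-only run above:
cfg = ExperimentConfig(
    name="EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_"
         "1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d",
    prediction_window="003", max_cases="_ALL",
    use_abp=False, use_eeg=True, use_ecg=False,
    n_residuals=12, skip_connection=False,
    batch_size=128, learning_rate=0.0001, weight_decay=0.1,
    max_epochs=80, patience=5, device="mps",
)
```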

Model Architecture
HypotensionCNN(
  (eegResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
  )
  (eegFc): Linear(in_features=720, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=32, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
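The printed architecture above shows each `ResidualBlock` holding `bn1`, `relu`, `dropout`, `conv1`, `bn2`, `conv2`, a 1-to-1 `residualConv` on the skip path, and an optional `MaxPool1d` downsample. A minimal PyTorch sketch consistent with those submodules follows; note that the exact `forward` ordering is an assumption inferred from the module order (the printout does not show the forward pass), so this is illustrative rather than the project's exact implementation.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Sketch of a pre-activation 1-D residual block matching the
    submodules shown in the printed architecture. Forward ordering
    is assumed, not confirmed by the printout."""

    def __init__(self, in_ch, out_ch, kernel_size=7, downsample=False, dropout=0.5):
        super().__init__()
        pad = kernel_size // 2  # "same" padding for odd kernels
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=dropout)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        # Skip path projects in_ch -> out_ch so the shapes match for addition.
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(kernel_size=2, stride=2) if downsample else None

    def forward(self, x):
        out = self.dropout(self.relu(self.bn1(x)))
        out = self.conv1(out)
        out = self.relu(self.bn2(out))
        out = self.conv2(out)
        out = out + self.residualConv(x)  # residual connection
        if self.downsample is not None:
            out = self.downsample(out)    # halve the temporal length
        return out
```

As in block (0) of the printout, a block with `in_ch=1`, `out_ch=2`, and downsampling maps a `(batch, 1, L)` input to `(batch, 2, L // 2)`.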

Training Loop
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.09it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.31it/s]
[2024-05-06 23:25:10.208865] Completed epoch 0 with training loss 0.68267250, validation loss 0.65055537
Validation loss improved to 0.65055537. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:40<00:00,  2.29it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.57it/s]
[2024-05-06 23:25:56.705252] Completed epoch 1 with training loss 0.62103999, validation loss 0.62796497
Validation loss improved to 0.62796497. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:40<00:00,  2.30it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-06 23:26:43.124387] Completed epoch 2 with training loss 0.61489582, validation loss 0.61881405
Validation loss improved to 0.61881405. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:38<00:00,  2.37it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-06 23:27:28.301937] Completed epoch 3 with training loss 0.60927409, validation loss 0.61599141
Validation loss improved to 0.61599141. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:39<00:00,  2.34it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-06 23:28:13.941594] Completed epoch 4 with training loss 0.60475147, validation loss 0.61307645
Validation loss improved to 0.61307645. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:39<00:00,  2.31it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-06 23:29:00.032091] Completed epoch 5 with training loss 0.60373586, validation loss 0.61140704
Validation loss improved to 0.61140704. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:39<00:00,  2.35it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.65it/s]
[2024-05-06 23:29:45.439909] Completed epoch 6 with training loss 0.60149646, validation loss 0.60909307
Validation loss improved to 0.60909307. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:39<00:00,  2.36it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
[2024-05-06 23:30:30.629857] Completed epoch 7 with training loss 0.60069025, validation loss 0.60868412
Validation loss improved to 0.60868412. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:38<00:00,  2.36it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.66it/s]
[2024-05-06 23:31:15.713242] Completed epoch 8 with training loss 0.59905481, validation loss 0.60431945
Validation loss improved to 0.60431945. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:40<00:00,  2.28it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
[2024-05-06 23:32:02.203487] Completed epoch 9 with training loss 0.59836173, validation loss 0.60536647
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:39<00:00,  2.30it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-06 23:32:48.521065] Completed epoch 10 with training loss 0.59854132, validation loss 0.60163778
Validation loss improved to 0.60163778. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:39<00:00,  2.35it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
[2024-05-06 23:33:33.834417] Completed epoch 11 with training loss 0.59582889, validation loss 0.60614061
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:40<00:00,  2.27it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
[2024-05-06 23:34:20.842644] Completed epoch 12 with training loss 0.59468710, validation loss 0.60407883
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:39<00:00,  2.32it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
[2024-05-06 23:35:06.740295] Completed epoch 13 with training loss 0.59459966, validation loss 0.60604942
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:38<00:00,  2.38it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.67it/s]
[2024-05-06 23:35:51.348535] Completed epoch 14 with training loss 0.59348524, validation loss 0.60150123
Validation loss improved to 0.60150123. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:38<00:00,  2.41it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
[2024-05-06 23:36:36.137373] Completed epoch 15 with training loss 0.59377939, validation loss 0.59637558
Validation loss improved to 0.59637558. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:39<00:00,  2.30it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
[2024-05-06 23:37:22.190008] Completed epoch 16 with training loss 0.59297216, validation loss 0.60149026
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:39<00:00,  2.35it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
[2024-05-06 23:38:07.476842] Completed epoch 17 with training loss 0.59200430, validation loss 0.60102093
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:40<00:00,  2.29it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.57it/s]
[2024-05-06 23:38:53.931164] Completed epoch 18 with training loss 0.59300405, validation loss 0.60135794
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:41<00:00,  2.22it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
[2024-05-06 23:39:41.949222] Completed epoch 19 with training loss 0.59133208, validation loss 0.59950197
No improvement in validation loss. 4 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:38<00:00,  2.37it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.67it/s]
[2024-05-06 23:40:26.844534] Completed epoch 20 with training loss 0.59223503, validation loss 0.59802228
No improvement in validation loss. 5 epochs without improvement.
Early stopping due to no improvement in validation loss.
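The early-stopping behavior logged above (save on each validation-loss improvement, stop after `patience=5` epochs without improvement) can be replayed with a small helper. This sketch only walks a list of validation losses; the real training loop also runs the forward/backward passes and writes a model checkpoint on each improvement. The function name `replay_early_stopping` is illustrative.

```python
def replay_early_stopping(val_losses, patience=5):
    """Walk a sequence of per-epoch validation losses and return the epoch
    with the best loss, mimicking the patience-based early stopping in the
    log above. A checkpoint would be saved at each improvement."""
    best, best_epoch, stale = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, stale = loss, epoch, 0  # model saved here
        else:
            stale += 1
            if stale >= patience:
                break  # early stop: no improvement for `patience` epochs
    return best_epoch, best

# Validation losses from the EEG training log above (epochs 0-20):
losses = [0.65055537, 0.62796497, 0.61881405, 0.61599141, 0.61307645,
          0.61140704, 0.60909307, 0.60868412, 0.60431945, 0.60536647,
          0.60163778, 0.60614061, 0.60407883, 0.60604942, 0.60150123,
          0.59637558, 0.60149026, 0.60102093, 0.60135794, 0.59950197,
          0.59802228]
```

Replaying these losses stops after epoch 20 and reports epoch 15 (loss 0.5964) as the best, matching the "Epoch with best Validation Loss" line below.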

Plot Validation and Loss Values from Training
  Epoch with best Validation Loss:   15, 0.5964
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0000.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.95it/s]
Loss: 0.6502921395003796
AUROC: 0.5077008364571627
AUPRC: 0.2827325467525825
Sensitivity: 0.48128342245989303
Specificity: 0.5239774330042313
Threshold: 0.45
Accuracy:  0.5118746841839312

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.50it/s]
Loss: 0.6511740035480923
AUROC: 0.4926527560344514
AUPRC: 0.27759939337738415
Sensitivity: 0.4586415601882986
Specificity: 0.5075125208681135
Threshold: 0.45
Accuracy:  0.49471830985915494

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0001.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.66it/s]
Loss: 0.6255926005542278
AUROC: 0.508284747416084
AUPRC: 0.2841945468781877
Sensitivity: 0.49019607843137253
Specificity: 0.5126939351198871
Threshold: 0.4
Accuracy:  0.5063163213744315

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.64it/s]
Loss: 0.6218459420733982
AUROC: 0.49339285333370964
AUPRC: 0.2777467649149728
Sensitivity: 0.4613315400134499
Specificity: 0.49678034819937994
Threshold: 0.4
Accuracy:  0.4875

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0002.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.68it/s]
Loss: 0.6186214685440063
AUROC: 0.5104915411478093
AUPRC: 0.2861124328571292
Sensitivity: 0.5222816399286988
Specificity: 0.49717912552891397
Threshold: 0.38
Accuracy:  0.5042950985346134

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.64it/s]
Loss: 0.6116940802998013
AUROC: 0.4958769788119982
AUPRC: 0.2791411955992929
Sensitivity: 0.5057162071284466
Specificity: 0.4629143811113761
Threshold: 0.38
Accuracy:  0.47411971830985916

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0003.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.65it/s]
Loss: 0.6157516688108444
AUROC: 0.5115424551664493
AUPRC: 0.28673393384695844
Sensitivity: 0.46880570409982175
Specificity: 0.535966149506347
Threshold: 0.38
Accuracy:  0.5169277412834765

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.63it/s]
Loss: 0.607893623246087
AUROC: 0.4974224501687332
AUPRC: 0.27991388235868786
Sensitivity: 0.44317417619367855
Specificity: 0.5282613880276652
Threshold: 0.38
Accuracy:  0.5059859154929578

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0004.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.66it/s]
Loss: 0.6127794422209263
AUROC: 0.5149082712967223
AUPRC: 0.2890102723002931
Sensitivity: 0.4919786096256685
Specificity: 0.5218617771509168
Threshold: 0.37
Accuracy:  0.5133906013137949

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.64it/s]
Loss: 0.6034559349219004
AUROC: 0.5000313552978665
AUPRC: 0.2812614775068336
Sensitivity: 0.4613315400134499
Specificity: 0.5089434772239446
Threshold: 0.37
Accuracy:  0.4964788732394366

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0005.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.6140436008572578
AUROC: 0.5186053264747366
AUPRC: 0.29190138863771287
Sensitivity: 0.46345811051693403
Specificity: 0.5528913963328632
Threshold: 0.37
Accuracy:  0.5275391611925214

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.64it/s]
Loss: 0.6011722789870368
AUROC: 0.5027988813456186
AUPRC: 0.28250890078509616
Sensitivity: 0.42837928715534634
Specificity: 0.5521106606248509
Threshold: 0.37
Accuracy:  0.5197183098591549

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0006.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.69it/s]
Loss: 0.6094557531177998
AUROC: 0.522609107753885
AUPRC: 0.29495043897955375
Sensitivity: 0.49376114081996436
Specificity: 0.5253878702397743
Threshold: 0.36
Accuracy:  0.516422435573522

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:16<00:00,  2.65it/s]
Loss: 0.5976414614253573
AUROC: 0.5060238258563645
AUPRC: 0.2837719108505569
Sensitivity: 0.4667114996637525
Specificity: 0.5170522299069878
Threshold: 0.36
Accuracy:  0.5038732394366198

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0007.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.67it/s]
Loss: 0.6088502109050751
AUROC: 0.527027094976983
AUPRC: 0.2987343872184681
Sensitivity: 0.5597147950089126
Specificity: 0.4929478138222849
Threshold: 0.36
Accuracy:  0.5118746841839312

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.64it/s]
Loss: 0.5988986531893412
AUROC: 0.5099255957225921
AUPRC: 0.2852293520628535
Sensitivity: 0.5527908540685945
Specificity: 0.4385881230622466
Threshold: 0.36
Accuracy:  0.46848591549295776

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0008.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.68it/s]
Loss: 0.6073535047471523
AUROC: 0.5339705442376976
AUPRC: 0.30430170903985754
Sensitivity: 0.49732620320855614
Specificity: 0.5486600846262342
Threshold: 0.36
Accuracy:  0.5341081354219303

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:16<00:00,  2.66it/s]
Loss: 0.5957185546557109
AUROC: 0.515909726253013
AUPRC: 0.2874754273981137
Sensitivity: 0.47679892400806995
Specificity: 0.5296923443834963
Threshold: 0.36
Accuracy:  0.5158450704225352

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0009.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.67it/s]
Loss: 0.6066821180284023
AUROC: 0.5398995346311367
AUPRC: 0.3097984484879813
Sensitivity: 0.5044563279857398
Specificity: 0.5578279266572638
Threshold: 0.36
Accuracy:  0.5426983324911572

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:16<00:00,  2.67it/s]
Loss: 0.5946895228491889
AUROC: 0.5214629018710693
AUPRC: 0.29015482243859436
Sensitivity: 0.47747141896435774
Specificity: 0.5401860243262581
Threshold: 0.36
Accuracy:  0.5237676056338029

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0010.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
Loss: 0.6016007885336876
AUROC: 0.5483753573233371
AUPRC: 0.3173452697152249
Sensitivity: 0.5686274509803921
Specificity: 0.49576868829337095
Threshold: 0.34
Accuracy:  0.516422435573522

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.64it/s]
Loss: 0.5880555066797468
AUROC: 0.5289015653751545
AUPRC: 0.29351854554092094
Sensitivity: 0.5709482178883658
Specificity: 0.4612449320295731
Threshold: 0.34
Accuracy:  0.48996478873239435

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0011.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.68it/s]
Loss: 0.6070766486227512
AUROC: 0.5565243407274436
AUPRC: 0.32401660274472754
Sensitivity: 0.5597147950089126
Specificity: 0.5183356840620592
Threshold: 0.36
Accuracy:  0.5300656897422941

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:16<00:00,  2.67it/s]
Loss: 0.5946040047539605
AUROC: 0.5361595550017635
AUPRC: 0.29679800543748514
Sensitivity: 0.5622057834566241
Specificity: 0.4760314810398283
Threshold: 0.36
Accuracy:  0.49859154929577465

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0012.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.68it/s]
Loss: 0.6053192839026451
AUROC: 0.5655840743785654
AUPRC: 0.3317660350192104
Sensitivity: 0.5080213903743316
Specificity: 0.5952045133991537
Threshold: 0.36
Accuracy:  0.5704901465386559

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.64it/s]
Loss: 0.5926130188835992
AUROC: 0.5440622127602108
AUPRC: 0.3004035370096213
Sensitivity: 0.5097511768661735
Specificity: 0.5468638206534701
Threshold: 0.36
Accuracy:  0.5371478873239437

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0013.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.66it/s]
Loss: 0.6074720099568367
AUROC: 0.5737016309280476
AUPRC: 0.3379535670945291
Sensitivity: 0.6327985739750446
Specificity: 0.4590973201692525
Threshold: 0.36
Accuracy:  0.5083375442142496

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.64it/s]
Loss: 0.5949786987569597
AUROC: 0.550853000429351
AUPRC: 0.2970307985993017
Sensitivity: 0.4445191661062542
Specificity: 0.6203195802528023
Threshold: 0.37
Accuracy:  0.5742957746478873

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0014.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.67it/s]
Loss: 0.6026437766849995
AUROC: 0.5819725505280968
AUPRC: 0.345205405365647
Sensitivity: 0.6042780748663101
Specificity: 0.5204513399153737
Threshold: 0.35
Accuracy:  0.5442142496210207

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.64it/s]
Loss: 0.5890091326501634
AUROC: 0.5593490832625099
AUPRC: 0.30717455766582924
Sensitivity: 0.624747814391392
Specificity: 0.4617219174815168
Threshold: 0.35
Accuracy:  0.5044014084507042

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0015.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
Loss: 0.5973393768072128
AUROC: 0.5868663403302082
AUPRC: 0.3488716837729511
Sensitivity: 0.6060606060606061
Specificity: 0.5112834978843441
Threshold: 0.34
Accuracy:  0.5381505811015664

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.64it/s]
Loss: 0.5846603605482313
AUROC: 0.5652755553295906
AUPRC: 0.3053967327072643
Sensitivity: 0.628782784129119
Specificity: 0.4674457429048414
Threshold: 0.34
Accuracy:  0.5096830985915493

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0016.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.69it/s]
Loss: 0.6001610942184925
AUROC: 0.5908405803660097
AUPRC: 0.3515988968727485
Sensitivity: 0.6060606060606061
Specificity: 0.5084626234132581
Threshold: 0.35
Accuracy:  0.5361293582617483

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:16<00:00,  2.66it/s]
Loss: 0.5874246286021338
AUROC: 0.569840517813097
AUPRC: 0.3079933762961893
Sensitivity: 0.6119704102219233
Specificity: 0.4805628428332936
Threshold: 0.35
Accuracy:  0.5149647887323944

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0017.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.77it/s]
Loss: 0.6005946658551693
AUROC: 0.5928399568572139
AUPRC: 0.35044070991655263
Sensitivity: 0.5008912655971479
Specificity: 0.6325811001410437
Threshold: 0.36
Accuracy:  0.5952501263264275

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:16<00:00,  2.78it/s]
Loss: 0.5871566686365339
AUROC: 0.5736727607144902
AUPRC: 0.3108729787349037
Sensitivity: 0.4821788836583726
Specificity: 0.6255664202241832
Threshold: 0.36
Accuracy:  0.5880281690140845

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0018.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.80it/s]
Loss: 0.601145301014185
AUROC: 0.593259819634996
AUPRC: 0.34820456228591806
Sensitivity: 0.5668449197860963
Specificity: 0.5528913963328632
Threshold: 0.36
Accuracy:  0.5568468923698838

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:16<00:00,  2.76it/s]
Loss: 0.5882413936985864
AUROC: 0.576730343315652
AUPRC: 0.31214506454674895
Sensitivity: 0.5427034297242771
Specificity: 0.5583114715001193
Threshold: 0.36
Accuracy:  0.5542253521126761

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0019.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.76it/s]
Loss: 0.5968304984271526
AUROC: 0.5940014934041318
AUPRC: 0.3468385267466053
Sensitivity: 0.5240641711229946
Specificity: 0.6086036671368125
Threshold: 0.36
Accuracy:  0.5846387064173825

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:16<00:00,  2.75it/s]
Loss: 0.58515844212638
AUROC: 0.5786018776931675
AUPRC: 0.313366684219665
Sensitivity: 0.6072629455279085
Specificity: 0.5015502027188171
Threshold: 0.35
Accuracy:  0.5292253521126761

Intermediate Model:
  ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0020.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.79it/s]
Loss: 0.598326213657856
AUROC: 0.5940580617424557
AUPRC: 0.3447537256246564
Sensitivity: 0.5757575757575758
Specificity: 0.5592383638928068
Threshold: 0.36
Accuracy:  0.5639211723092471

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:16<00:00,  2.72it/s]
Loss: 0.5867899278799693
AUROC: 0.5800898509717175
AUPRC: 0.31292113011747696
Sensitivity: 0.550773369199731
Specificity: 0.5666587169091343
Threshold: 0.36
Accuracy:  0.5625


Plot AUROC/AUPRC for Each Intermediate Model
  Epoch with best Validation Loss:      15, 0.5964
  Epoch with best model Test AUROC:     20, 0.5801
  Epoch with best model Test Accuracy:  17, 0.588
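The summary above selects two checkpoints from the per-epoch metrics: the epoch with the lowest validation loss and, separately, the epoch with the highest test AUROC. A minimal sketch of that bookkeeping (the helper name and the toy metric lists are illustrative, not the project's actual code or numbers):

```python
# Hypothetical sketch of picking "best" epochs from per-epoch metric lists.
# Epochs are 1-indexed here to match the checkpoint numbering in the log.

def best_epochs(val_losses, test_aurocs):
    """Return (epoch with min validation loss, epoch with max test AUROC)."""
    best_loss_epoch = min(range(len(val_losses)), key=val_losses.__getitem__) + 1
    best_auroc_epoch = max(range(len(test_aurocs)), key=test_aurocs.__getitem__) + 1
    return best_loss_epoch, best_auroc_epoch

# Toy values shaped like the run (loss dips mid-run, AUROC keeps climbing):
val_losses = [0.6067, 0.6016, 0.6071, 0.6053, 0.5964, 0.6002]
test_aurocs = [0.5215, 0.5289, 0.5362, 0.5441, 0.5653, 0.5801]
print(best_epochs(val_losses, test_aurocs))  # (5, 6)
```

Note that, as in the run above, the two criteria can disagree: the lowest-validation-loss checkpoint is not necessarily the one with the best test AUROC.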

AUROC/AUPRC Plots - Best Model Based on Validation Loss
  Epoch with best Validation Loss:   15, 0.5964
  Best Model Based on Validation Loss:
    ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0015.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:16<00:00,  2.74it/s]
Loss: 0.5846603605482313
AUROC: 0.5652755553295906
AUPRC: 0.3053967327072643
Sensitivity: 0.628782784129119
Specificity: 0.4674457429048414
Threshold: 0.34
Accuracy:  0.5096830985915493
best_model_val_test_auroc: 0.5652755553295906
best_model_val_test_auprc: 0.3053967327072643

AUROC/AUPRC Plots - Best Model Based on Model AUROC
  Epoch with best model Test AUROC:  20, 0.5801
  Best Model Based on Model AUROC:
    ./vitaldb_cache/models/EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_9a78731d_0020.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:16<00:00,  2.74it/s]
Loss: 0.5867899278799693
AUROC: 0.5800898509717175
AUPRC: 0.31292113011747696
Sensitivity: 0.550773369199731
Specificity: 0.5666587169091343
Threshold: 0.36
Accuracy:  0.5625
best_model_auroc_test_auroc: 0.5800898509717175
best_model_auroc_test_auprc: 0.31292113011747696
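Each evaluation above reports a "Threshold" alongside sensitivity and specificity. One common way to pick such an operating point, and an assumption here about what the notebook does rather than something the log confirms, is to sweep a grid of thresholds and maximize Youden's J = sensitivity + specificity - 1:

```python
# Hedged sketch: threshold selection by maximizing Youden's J over a grid.
# This is an assumed method; the log only shows the chosen threshold value.

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity at a fixed decision threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

def best_threshold(labels, scores, grid):
    """Grid value maximizing Youden's J (first maximum wins on ties)."""
    def youden(t):
        sens, spec = sens_spec(labels, scores, t)
        return sens + spec - 1.0
    return max(grid, key=youden)

labels = [0, 0, 1, 1, 0, 1]
scores = [0.20, 0.30, 0.40, 0.55, 0.35, 0.60]
grid = [i / 100 for i in range(1, 100)]
print(best_threshold(labels, scores, grid))  # 0.36
```

This explains why the reported threshold drifts between evaluations (0.34-0.37 in the log): the score distribution shifts from epoch to epoch, so the J-maximizing cut point moves with it.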

Total Processing Time: 1502.0170 sec
Experiment Setup
  name:              ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b
  prediction_window: 003
  max_cases:         _ALL
  use_abp:           True
  use_eeg:           False
  use_ecg:           True
  n_residuals:       12
  skip_connection:   False
  batch_size:        128
  learning_rate:     0.0001
  weight_decay:      0.1
  max_epochs:        80
  patience:          5
  device:            mps

Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (abpFc): Linear(in_features=2814, out_features=32, bias=True)
  (ecgResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (ecgFc): Linear(in_features=2814, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=64, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
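The architecture dump above shows the modules each ResidualBlock holds (bn1, relu, dropout, conv1, bn2, conv2, a residualConv projection on the skip path, and an optional MaxPool1d downsample), but `print(model)` does not show the order in which `forward()` applies them. The sketch below reconstructs one plausible pre-activation ordering; that ordering is an assumption, not something the printout confirms:

```python
# Hedged PyTorch sketch of the printed ResidualBlock. Module names and shapes
# match the architecture dump; the forward() ORDER is an assumption
# (pre-activation style: BN -> ReLU -> Dropout -> Conv).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=15, downsample=False):
        super().__init__()
        pad = kernel_size // 2  # "same"-length padding for odd kernels
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=0.5)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(kernel_size=2, stride=2) if downsample else None

    def forward(self, x):
        out = self.conv1(self.dropout(self.relu(self.bn1(x))))
        out = self.conv2(self.dropout(self.relu(self.bn2(out))))
        out = out + self.residualConv(x)  # skip path projects channel count
        if self.downsample is not None:
            out = self.downsample(out)    # halves the temporal length
        return out

x = torch.randn(8, 1, 3000)                        # (batch, channels, samples)
y = ResidualBlock(1, 2, downsample=True)(x)
print(tuple(y.shape))                              # (8, 2, 1500)
```

With this shape arithmetic, stacking 12 such blocks (6 of them downsampling) reduces a waveform to the 6-channel feature map that feeds the 2814-input `abpFc`/`ecgFc` linear layers shown above.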

Training Loop
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-06 23:52:06.496312] Completed epoch 0 with training loss 0.50932217, validation loss 0.66336685
Validation loss improved to 0.66336685. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.66it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-06 23:53:08.217355] Completed epoch 1 with training loss 0.44563124, validation loss 0.63747633
Validation loss improved to 0.63747633. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-06 23:54:09.850074] Completed epoch 2 with training loss 0.44022346, validation loss 0.62522590
Validation loss improved to 0.62522590. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-06 23:55:11.426400] Completed epoch 3 with training loss 0.43809348, validation loss 0.61598957
Validation loss improved to 0.61598957. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-06 23:56:13.064540] Completed epoch 4 with training loss 0.43918857, validation loss 0.57039440
Validation loss improved to 0.57039440. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-06 23:57:14.586659] Completed epoch 5 with training loss 0.44047263, validation loss 0.60301912
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-06 23:58:15.786079] Completed epoch 6 with training loss 0.43801460, validation loss 0.56775016
Validation loss improved to 0.56775016. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-06 23:59:17.207263] Completed epoch 7 with training loss 0.43590185, validation loss 0.58156061
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-07 00:00:18.723507] Completed epoch 8 with training loss 0.43600875, validation loss 0.59659964
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-07 00:01:20.330830] Completed epoch 9 with training loss 0.43488389, validation loss 0.61646330
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-07 00:02:21.794381] Completed epoch 10 with training loss 0.43644682, validation loss 0.56188554
Validation loss improved to 0.56188554. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-07 00:03:23.334580] Completed epoch 11 with training loss 0.43534774, validation loss 0.61564755
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-07 00:04:24.768175] Completed epoch 12 with training loss 0.43481636, validation loss 0.55373085
Validation loss improved to 0.55373085. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-07 00:05:26.272582] Completed epoch 13 with training loss 0.43672857, validation loss 0.54501247
Validation loss improved to 0.54501247. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-07 00:06:27.696530] Completed epoch 14 with training loss 0.43419001, validation loss 0.56584013
No improvement in validation loss. 1 epoch without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-07 00:07:29.277823] Completed epoch 15 with training loss 0.43405101, validation loss 0.52758998
Validation loss improved to 0.52758998. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-07 00:08:30.667268] Completed epoch 16 with training loss 0.43442178, validation loss 0.57197040
No improvement in validation loss. 1 epoch without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-07 00:09:32.140281] Completed epoch 17 with training loss 0.43543360, validation loss 0.56076568
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.48it/s]
[2024-05-07 00:10:33.516339] Completed epoch 18 with training loss 0.43559518, validation loss 0.58055252
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-07 00:11:34.721696] Completed epoch 19 with training loss 0.43409327, validation loss 0.51616561
Validation loss improved to 0.51616561. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-07 00:12:35.910656] Completed epoch 20 with training loss 0.43698367, validation loss 0.54630846
No improvement in validation loss. 1 epoch without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.56it/s]
[2024-05-07 00:13:36.959205] Completed epoch 21 with training loss 0.43207112, validation loss 0.53785837
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-07 00:14:38.391837] Completed epoch 22 with training loss 0.43021354, validation loss 0.52767825
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.49it/s]
[2024-05-07 00:15:39.958006] Completed epoch 23 with training loss 0.43257162, validation loss 0.53902280
No improvement in validation loss. 4 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.66it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-07 00:16:41.679137] Completed epoch 24 with training loss 0.42944759, validation loss 0.53623521
No improvement in validation loss. 5 epochs without improvement.
Early stopping due to no improvement in validation loss.
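The log above reflects patience-based early stopping: the model checkpoint is saved whenever validation loss improves, and training halts after 5 epochs without improvement. A minimal sketch of that loop, where `train_one_epoch`, `evaluate`, and `model` are hypothetical placeholders (the notebook's actual training code is not shown here):

```python
import math

def fit(model, train_one_epoch, evaluate, max_epochs=80, patience=5):
    """Train until validation loss stops improving for `patience` epochs."""
    best_val = math.inf
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch(model)
        val_loss = evaluate(model)
        if val_loss < best_val:
            best_val = val_loss
            epochs_without_improvement = 0
            # checkpoint here, e.g. torch.save(...)  -> "Model saved." in the log
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                print("Early stopping due to no improvement in validation loss.")
                break
    return best_val
```

Note that the checkpoint from the best epoch (here, epoch 19) is what gets evaluated later, not the final epoch's weights.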

Plot Training and Validation Loss Values from Training
  Epoch with best Validation Loss:   19, 0.5162
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0000.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.80it/s]
Loss: 0.6667963452637196
AUROC: 0.8394062587209521
AUPRC: 0.7147522139308432
Sensitivity: 0.7843137254901961
Specificity: 0.7404795486600846
Threshold: 0.09
Accuracy:  0.7529055078322385

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.39it/s]
Loss: 0.609530649582545
AUROC: 0.833443464473325
AUPRC: 0.6703043061336393
Sensitivity: 0.7377269670477471
Specificity: 0.7925113284044837
Threshold: 0.1
Accuracy:  0.778169014084507

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0001.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.39it/s]
Loss: 0.637725967913866
AUROC: 0.8425426588124673
AUPRC: 0.7238938718199651
Sensitivity: 0.7771836007130125
Specificity: 0.7489421720733427
Threshold: 0.1
Accuracy:  0.7569479535118747

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.53it/s]
Loss: 0.5840680446889666
AUROC: 0.8383915870929084
AUPRC: 0.678505220563675
Sensitivity: 0.7397444519166106
Specificity: 0.7917958502265681
Threshold: 0.11
Accuracy:  0.778169014084507

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0002.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.6199027970433235
AUROC: 0.8427613897206531
AUPRC: 0.7264488558666128
Sensitivity: 0.7540106951871658
Specificity: 0.7743300423131171
Threshold: 0.11
Accuracy:  0.7685699848408287

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.52it/s]
Loss: 0.5732372015714645
AUROC: 0.8384730627518148
AUPRC: 0.6803792525774277
Sensitivity: 0.7720242098184263
Specificity: 0.7581683758645361
Threshold: 0.11
Accuracy:  0.7617957746478873

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0003.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
Loss: 0.6228217706084251
AUROC: 0.8428430995426766
AUPRC: 0.7273518652630054
Sensitivity: 0.7878787878787878
Specificity: 0.7376586741889986
Threshold: 0.1
Accuracy:  0.7518948964123294

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.54it/s]
Loss: 0.5731959654225244
AUROC: 0.8385506891669933
AUPRC: 0.6808252946972653
Sensitivity: 0.7545393409549428
Specificity: 0.7767708084903411
Threshold: 0.11
Accuracy:  0.7709507042253522

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0004.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.56it/s]
Loss: 0.5695207808166742
AUROC: 0.843234049614204
AUPRC: 0.7275589544375128
Sensitivity: 0.7629233511586453
Specificity: 0.764456981664316
Threshold: 0.14
Accuracy:  0.764022233451238

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.48it/s]
Loss: 0.5315346724457211
AUROC: 0.8388469205488829
AUPRC: 0.680363177760595
Sensitivity: 0.7800941492938803
Specificity: 0.7495826377295493
Threshold: 0.14
Accuracy:  0.7575704225352112

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0005.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
Loss: 0.6005638614296913
AUROC: 0.8431699388307701
AUPRC: 0.728782390499617
Sensitivity: 0.7433155080213903
Specificity: 0.7863187588152327
Threshold: 0.12
Accuracy:  0.7741283476503285

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.55it/s]
Loss: 0.5554769439829721
AUROC: 0.8382728218853884
AUPRC: 0.6800775536152074
Sensitivity: 0.7652992602555481
Specificity: 0.7660386358216075
Threshold: 0.12
Accuracy:  0.7658450704225352

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0006.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.57it/s]
Loss: 0.5636609755456448
AUROC: 0.8431045709731515
AUPRC: 0.7276158639911978
Sensitivity: 0.7611408199643493
Specificity: 0.771509167842031
Threshold: 0.14
Accuracy:  0.7685699848408287

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.54it/s]
Loss: 0.5273131900363498
AUROC: 0.8384338485813372
AUPRC: 0.6797127747747301
Sensitivity: 0.7753866845998655
Specificity: 0.7536370140710709
Threshold: 0.14
Accuracy:  0.759330985915493

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0007.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.56it/s]
Loss: 0.5788453631103039
AUROC: 0.8427727033883179
AUPRC: 0.7274250721687421
Sensitivity: 0.7557932263814616
Specificity: 0.7764456981664316
Threshold: 0.13
Accuracy:  0.7705912076806468

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.56it/s]
Loss: 0.5359184119436476
AUROC: 0.8376100302309979
AUPRC: 0.6790914437158786
Sensitivity: 0.7747141896435776
Specificity: 0.7553064631528739
Threshold: 0.13
Accuracy:  0.7603873239436619

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0008.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.58it/s]
Loss: 0.5966633278876543
AUROC: 0.8421529658151246
AUPRC: 0.7271862364413374
Sensitivity: 0.7522281639928698
Specificity: 0.7785613540197461
Threshold: 0.12
Accuracy:  0.7710965133906014

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.55it/s]
Loss: 0.5492379181914859
AUROC: 0.8369317614091183
AUPRC: 0.6786302571507296
Sensitivity: 0.7740416946872899
Specificity: 0.7538755067970427
Threshold: 0.12
Accuracy:  0.7591549295774648

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0009.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.6266313195228577
AUROC: 0.8414477471973532
AUPRC: 0.7269229744223902
Sensitivity: 0.7629233511586453
Specificity: 0.7679830747531735
Threshold: 0.1
Accuracy:  0.7665487620010106

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.53it/s]
Loss: 0.5696854445669386
AUROC: 0.8359289532254337
AUPRC: 0.6781664139989201
Sensitivity: 0.7868190988567586
Specificity: 0.7381349868829
Threshold: 0.1
Accuracy:  0.7508802816901409

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0010.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.57it/s]
Loss: 0.5677480343729258
AUROC: 0.8416287658799897
AUPRC: 0.725578075550334
Sensitivity: 0.7522281639928698
Specificity: 0.7806770098730607
Threshold: 0.14
Accuracy:  0.7726124305204649

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.57it/s]
Loss: 0.5238323238160875
AUROC: 0.8361490818511206
AUPRC: 0.6779359512486719
Sensitivity: 0.7726967047747142
Specificity: 0.7519675649892679
Threshold: 0.14
Accuracy:  0.7573943661971831

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0011.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.57it/s]
Loss: 0.6090561412274837
AUROC: 0.8404515159057597
AUPRC: 0.7257819567534999
Sensitivity: 0.7415329768270945
Specificity: 0.7905500705218618
Threshold: 0.11
Accuracy:  0.7766548762001011

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.55it/s]
Loss: 0.562081884013282
AUROC: 0.835037532532124
AUPRC: 0.6769945899371775
Sensitivity: 0.7531943510423672
Specificity: 0.7643691867398045
Threshold: 0.11
Accuracy:  0.761443661971831

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0012.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.58it/s]
Loss: 0.5543177351355553
AUROC: 0.8410831956837101
AUPRC: 0.725667719709847
Sensitivity: 0.7736185383244206
Specificity: 0.7425952045133991
Threshold: 0.14
Accuracy:  0.751389590702375

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.56it/s]
Loss: 0.5148674938413832
AUROC: 0.835424782489662
AUPRC: 0.6771117571323201
Sensitivity: 0.7585743106926698
Specificity: 0.7619842594800859
Threshold: 0.15
Accuracy:  0.7610915492957746

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0013.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.56it/s]
Loss: 0.5412606671452522
AUROC: 0.8411611342831785
AUPRC: 0.7264344058631771
Sensitivity: 0.7629233511586453
Specificity: 0.7729196050775741
Threshold: 0.15
Accuracy:  0.7700859019706923

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.54it/s]
Loss: 0.5036982390615675
AUROC: 0.8350026487608403
AUPRC: 0.6767432355127976
Sensitivity: 0.7444519166106254
Specificity: 0.7758168375864536
Threshold: 0.16
Accuracy:  0.7676056338028169

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0014.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.50it/s]
Loss: 0.5693398024886847
AUROC: 0.8404295171075226
AUPRC: 0.7258715057938755
Sensitivity: 0.7611408199643493
Specificity: 0.7757404795486601
Threshold: 0.13
Accuracy:  0.7716018191005558

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.39it/s]
Loss: 0.5254861301845974
AUROC: 0.8346143562997925
AUPRC: 0.6757330078217881
Sensitivity: 0.7437794216543376
Specificity: 0.7777247793942285
Threshold: 0.14
Accuracy:  0.768838028169014

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0015.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.56it/s]
Loss: 0.5269688758999109
AUROC: 0.8398474917598787
AUPRC: 0.7248811324272757
Sensitivity: 0.7629233511586453
Specificity: 0.765867418899859
Threshold: 0.17
Accuracy:  0.7650328448711471

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.54it/s]
Loss: 0.4953606718116336
AUROC: 0.8337644753617126
AUPRC: 0.6754268928957853
Sensitivity: 0.7484868863483524
Specificity: 0.7677080849034105
Threshold: 0.18
Accuracy:  0.7626760563380282

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0016.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.56it/s]
Loss: 0.5726594626903534
AUROC: 0.8396840721158318
AUPRC: 0.724479160266647
Sensitivity: 0.7754010695187166
Specificity: 0.7440056417489421
Threshold: 0.12
Accuracy:  0.7529055078322385

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.54it/s]
Loss: 0.5310017946693633
AUROC: 0.8346066578123369
AUPRC: 0.6747299935823237
Sensitivity: 0.7632817753866846
Specificity: 0.7553064631528739
Threshold: 0.13
Accuracy:  0.7573943661971831

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0017.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.56it/s]
Loss: 0.563207158818841
AUROC: 0.839676529670722
AUPRC: 0.7230144488320356
Sensitivity: 0.7754010695187166
Specificity: 0.7447108603667136
Threshold: 0.13
Accuracy:  0.7534108135421931

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.54it/s]
Loss: 0.518364037738906
AUROC: 0.8347011246688247
AUPRC: 0.6746053627234561
Sensitivity: 0.7673167451244116
Specificity: 0.7543524922489864
Threshold: 0.14
Accuracy:  0.7577464788732394

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0018.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
Loss: 0.5743361599743366
AUROC: 0.8404458590719274
AUPRC: 0.7259619028436157
Sensitivity: 0.7486631016042781
Specificity: 0.7820874471086037
Threshold: 0.12
Accuracy:  0.7726124305204649

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.53it/s]
Loss: 0.5305962747997708
AUROC: 0.8355105885477622
AUPRC: 0.6748937337404561
Sensitivity: 0.777404169468729
Specificity: 0.7486286668256619
Threshold: 0.12
Accuracy:  0.7561619718309859

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0019.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
Loss: 0.5120771992951632
AUROC: 0.8406231065320089
AUPRC: 0.7277108393388817
Sensitivity: 0.7664884135472371
Specificity: 0.7510578279266573
Threshold: 0.17
Accuracy:  0.7554320363820111

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.54it/s]
Loss: 0.48307656976911756
AUROC: 0.834361027946953
AUPRC: 0.6746737603403997
Sensitivity: 0.7673167451244116
Specificity: 0.7498211304555211
Threshold: 0.18
Accuracy:  0.7544014084507042

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0020.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
Loss: 0.5470702853053808
AUROC: 0.8411183937608895
AUPRC: 0.7267390359362808
Sensitivity: 0.7736185383244206
Specificity: 0.7524682651622003
Threshold: 0.14
Accuracy:  0.7584638706417383

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.53it/s]
Loss: 0.5062093168497086
AUROC: 0.8360210143045916
AUPRC: 0.6743567957255647
Sensitivity: 0.7686617350369872
Specificity: 0.7562604340567612
Threshold: 0.15
Accuracy:  0.7595070422535212

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0021.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
Loss: 0.543586015701294
AUROC: 0.8423478123137959
AUPRC: 0.7270658272285664
Sensitivity: 0.7647058823529411
Specificity: 0.7574047954866009
Threshold: 0.14
Accuracy:  0.7594744820616472

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.55it/s]
Loss: 0.5036684725019667
AUROC: 0.8384113144670137
AUPRC: 0.6757409127364384
Sensitivity: 0.769334229993275
Specificity: 0.7569759122346769
Threshold: 0.15
Accuracy:  0.7602112676056338

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0022.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.57it/s]
Loss: 0.5262163132429123
AUROC: 0.8429788635546537
AUPRC: 0.730919351055884
Sensitivity: 0.7575757575757576
Specificity: 0.763046544428773
Threshold: 0.16
Accuracy:  0.7614957049014653

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.53it/s]
Loss: 0.49086920188532934
AUROC: 0.8382650432053551
AUPRC: 0.6761761531188036
Sensitivity: 0.7572293207800942
Specificity: 0.7629382303839732
Threshold: 0.17
Accuracy:  0.761443661971831

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0023.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
Loss: 0.5372203066945076
AUROC: 0.841399978378324
AUPRC: 0.7274537580814905
Sensitivity: 0.7664884135472371
Specificity: 0.7588152327221439
Threshold: 0.14
Accuracy:  0.7609903991915109

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.51it/s]
Loss: 0.4982596794764201
AUROC: 0.8359319203508071
AUPRC: 0.6738556411032152
Sensitivity: 0.769334229993275
Specificity: 0.753398521345099
Threshold: 0.15
Accuracy:  0.7575704225352112

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0024.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
Loss: 0.54166722856462
AUROC: 0.8416702493280938
AUPRC: 0.7278130032612363
Sensitivity: 0.7593582887700535
Specificity: 0.767277856135402
Threshold: 0.14
Accuracy:  0.7650328448711471

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.52it/s]
Loss: 0.4993602524201075
AUROC: 0.8370592676076036
AUPRC: 0.6737008220059966
Sensitivity: 0.7639542703429725
Specificity: 0.7612687813021702
Threshold: 0.15
Accuracy:  0.7619718309859155


Plot AUROC/AUPRC for Each Intermediate Model
  Epoch with best Validation Loss:      19, 0.5162
  Epoch with best model Test AUROC:      4, 0.8388
  Epoch with best model Test Accuracy:   0, 0.7782

AUROC/AUPRC Plots - Best Model Based on Validation Loss
  Epoch with best Validation Loss:   19, 0.5162
  Best Model Based on Validation Loss:
    ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0019.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.54it/s]
Loss: 0.48307656976911756
AUROC: 0.834361027946953
AUPRC: 0.6746737603403997
Sensitivity: 0.7673167451244116
Specificity: 0.7498211304555211
Threshold: 0.18
Accuracy:  0.7544014084507042
best_model_val_test_auroc: 0.834361027946953
best_model_val_test_auprc: 0.6746737603403997

AUROC/AUPRC Plots - Best Model Based on Model AUROC
  Epoch with best model Test AUROC:   4, 0.8388
  Best Model Based on Model AUROC:
    ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d852b97b_0004.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.52it/s]
Loss: 0.5315346724457211
AUROC: 0.8388469205488829
AUPRC: 0.680363177760595
Sensitivity: 0.7800941492938803
Specificity: 0.7495826377295493
Threshold: 0.14
Accuracy:  0.7575704225352112
best_model_auroc_test_auroc: 0.8388469205488829
best_model_auroc_test_auprc: 0.680363177760595
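Each per-model summary above reports AUROC, AUPRC, and an operating threshold with its sensitivity/specificity/accuracy. A hedged sketch of how such statistics could be computed, using only NumPy; the notebook's exact threshold rule is not shown in this log, so this sketch assumes Youden's J statistic maximized over a 0.01-step grid as one plausible choice:

```python
import numpy as np

def summarize(y_true, y_prob):
    """Compute AUROC, AUPRC, and a Youden's-J operating point (assumed rule)."""
    pos, neg = y_prob[y_true == 1], y_prob[y_true == 0]
    # AUROC as the Mann-Whitney rank statistic (ties count half)
    auroc = (np.mean(pos[:, None] > neg[None, :])
             + 0.5 * np.mean(pos[:, None] == neg[None, :]))
    # AUPRC as step-wise average precision
    y_sorted = y_true[np.argsort(-y_prob)]
    precision = np.cumsum(y_sorted) / np.arange(1, len(y_sorted) + 1)
    auprc = np.sum(precision * y_sorted) / y_sorted.sum()
    best = None
    for t in np.arange(0.01, 1.0, 0.01):
        y_pred = (y_prob >= t).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        tn = np.sum((y_pred == 0) & (y_true == 0))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        sens = tp / max(tp + fn, 1)
        spec = tn / max(tn + fp, 1)
        j = sens + spec - 1.0  # Youden's J
        if best is None or j > best[0]:
            best = (j, round(float(t), 2), sens, spec, (tp + tn) / len(y_true))
    _, threshold, sensitivity, specificity, accuracy = best
    return {"AUROC": auroc, "AUPRC": auprc, "Sensitivity": sensitivity,
            "Specificity": specificity, "Threshold": threshold,
            "Accuracy": accuracy}
```

The low thresholds in the log (0.09 to 0.18) are consistent with a class-imbalanced task where maximizing sensitivity plus specificity pushes the cut-off well below 0.5.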

Total Processing Time: 2200.8950 sec
Experiment Setup
  name:              ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71
  prediction_window: 003
  max_cases:         _ALL
  use_abp:           True
  use_eeg:           True
  use_ecg:           False
  n_residuals:       12
  skip_connection:   False
  batch_size:        128
  learning_rate:     0.0001
  weight_decay:      0.1
  max_epochs:        80
  patience:          5
  device:            mps

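The architecture printout that follows shows stacked pre-activation residual blocks (BatchNorm → ReLU → Dropout → Conv1d ×2, a convolution on the skip path, and an optional MaxPool1d for downsampling). A minimal PyTorch sketch of one such block; the printout fixes the layers but not the `forward()` order, which is assumed pre-activation here:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One block matching the module printout below (forward order assumed)."""
    def __init__(self, in_ch, out_ch, kernel_size=15, downsample=False):
        super().__init__()
        pad = kernel_size // 2  # kernel 15, padding 7 preserves sequence length
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=0.5)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(kernel_size=2, stride=2) if downsample else None

    def forward(self, x):
        out = self.conv1(self.dropout(self.relu(self.bn1(x))))
        out = self.conv2(self.dropout(self.relu(self.bn2(out))))
        out = out + self.residualConv(x)  # skip path has its own conv
        if self.downsample is not None:
            out = self.downsample(out)    # halves the sequence length
        return out
```

In the printout, downsampling appears on alternating blocks, which progressively halves the temporal resolution while the channel count grows.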
Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (abpFc): Linear(in_features=2814, out_features=32, bias=True)
  (eegResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
  )
  (eegFc): Linear(in_features=720, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=64, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
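The two fully connected input sizes in the printout (`abpFc` with `in_features=2814`, `eegFc` with `in_features=720`) follow directly from the pooling schedule: every convolution uses "same" padding (k=15/p=7, k=7/p=3, k=3/p=1), so only the six `MaxPool1d` layers in blocks 0, 2, 4, 6, 8, and 10 shrink each branch, and the final blocks emit 6 channels. A sketch verifying this, assuming 30,000-sample ABP and 7,680-sample EEG input windows (the window lengths are inferred from the printed sizes, not stated here):

```python
def pool_out_len(length, kernel=2, stride=2, padding=0):
    """Standard Conv1d/MaxPool1d output-length formula (floor mode)."""
    return (length + 2 * padding - kernel) // stride + 1


def abp_branch_len(length):
    # Only the pools change the length; block 8's pool has padding=1.
    for p in [0, 0, 0, 0, 1, 0]:
        length = pool_out_len(length, padding=p)
    return length


def eeg_branch_len(length):
    # All six EEG pools use padding=0.
    for _ in range(6):
        length = pool_out_len(length)
    return length


# Final residual blocks emit 6 channels; flatten -> channels * length.
abp_features = 6 * abp_branch_len(30000)   # matches abpFc in_features=2814
eeg_features = 6 * eeg_branch_len(7680)    # matches eegFc in_features=720
print(abp_features, eeg_features)
```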

Training Loop
[2024-05-07 00:30:24.144761] Completed epoch 0 with training loss 0.52068061, validation loss 0.58198005
Validation loss improved to 0.58198005. Model saved.
[2024-05-07 00:31:16.784101] Completed epoch 1 with training loss 0.45249948, validation loss 0.62701482
No improvement in validation loss. 1 epoch without improvement.
[2024-05-07 00:32:09.127281] Completed epoch 2 with training loss 0.44580990, validation loss 0.61036026
No improvement in validation loss. 2 epochs without improvement.
[2024-05-07 00:33:01.604759] Completed epoch 3 with training loss 0.43858191, validation loss 0.62131929
No improvement in validation loss. 3 epochs without improvement.
[2024-05-07 00:33:53.675457] Completed epoch 4 with training loss 0.43997571, validation loss 0.63153160
No improvement in validation loss. 4 epochs without improvement.
[2024-05-07 00:34:46.033100] Completed epoch 5 with training loss 0.44057277, validation loss 0.58019865
Validation loss improved to 0.58019865. Model saved.
[2024-05-07 00:35:39.959407] Completed epoch 6 with training loss 0.44102794, validation loss 0.61199272
No improvement in validation loss. 1 epoch without improvement.
[2024-05-07 00:36:33.820279] Completed epoch 7 with training loss 0.43785876, validation loss 0.56180578
Validation loss improved to 0.56180578. Model saved.
[2024-05-07 00:37:28.569169] Completed epoch 8 with training loss 0.43513718, validation loss 0.60047787
No improvement in validation loss. 1 epoch without improvement.
[2024-05-07 00:38:21.074800] Completed epoch 9 with training loss 0.44184694, validation loss 0.56146789
Validation loss improved to 0.56146789. Model saved.
[2024-05-07 00:39:13.460683] Completed epoch 10 with training loss 0.44001210, validation loss 0.53463203
Validation loss improved to 0.53463203. Model saved.
[2024-05-07 00:40:05.768453] Completed epoch 11 with training loss 0.44025645, validation loss 0.56098199
No improvement in validation loss. 1 epoch without improvement.
[2024-05-07 00:40:58.550406] Completed epoch 12 with training loss 0.43574378, validation loss 0.56535774
No improvement in validation loss. 2 epochs without improvement.
[2024-05-07 00:41:50.646615] Completed epoch 13 with training loss 0.43772337, validation loss 0.54831302
No improvement in validation loss. 3 epochs without improvement.
[2024-05-07 00:42:42.872286] Completed epoch 14 with training loss 0.43581170, validation loss 0.56130898
No improvement in validation loss. 4 epochs without improvement.
[2024-05-07 00:43:34.971052] Completed epoch 15 with training loss 0.43844551, validation loss 0.52837235
Validation loss improved to 0.52837235. Model saved.
[2024-05-07 00:44:27.256231] Completed epoch 16 with training loss 0.43602449, validation loss 0.56899852
No improvement in validation loss. 1 epoch without improvement.
[2024-05-07 00:45:19.572024] Completed epoch 17 with training loss 0.43746549, validation loss 0.55189663
No improvement in validation loss. 2 epochs without improvement.
[2024-05-07 00:46:11.804226] Completed epoch 18 with training loss 0.43064922, validation loss 0.55654401
No improvement in validation loss. 3 epochs without improvement.
[2024-05-07 00:47:04.398932] Completed epoch 19 with training loss 0.43274933, validation loss 0.55621636
No improvement in validation loss. 4 epochs without improvement.
[2024-05-07 00:47:56.785848] Completed epoch 20 with training loss 0.43140963, validation loss 0.57125264
No improvement in validation loss. 5 epochs without improvement.
Early stopping due to no improvement in validation loss.
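The loop above checkpoints on every validation-loss improvement and stops after `patience=5` consecutive epochs without one (hence the stop at epoch 20, five epochs after the best loss at epoch 15). A minimal sketch of that logic; `train_one_epoch`, `validate`, and `save_model` are placeholder callables, not the project's actual functions:

```python
def train_with_early_stopping(train_one_epoch, validate, save_model,
                              max_epochs=80, patience=5):
    best_val = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch()
        val_loss = validate()
        if val_loss < best_val:
            best_val = val_loss
            epochs_without_improvement = 0
            save_model(epoch)  # each save becomes an "intermediate model"
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # early stopping, as in the log above
    return best_val
```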

Plot Training and Validation Loss Values
  Epoch with best Validation Loss:   15, 0.5284
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0000.model
AUROC/AUPRC on Validation Data
Loss: 0.5791442915797234
AUROC: 0.8378914843280562
AUPRC: 0.7062211410509937
Sensitivity: 0.7397504456327986
Specificity: 0.7884344146685472
Threshold: 0.16
Accuracy:  0.774633653360283

AUROC/AUPRC on Test Data
Loss: 0.5410450402233336
AUROC: 0.831976180879812
AUPRC: 0.6643935406337405
Sensitivity: 0.7491593813046402
Specificity: 0.7741473885046506
Threshold: 0.16
Accuracy:  0.7676056338028169
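Each intermediate model above is reported with an operating "Threshold" alongside its sensitivity and specificity. One plausible selection rule, stated here as an assumption rather than the project's confirmed method, is to sweep candidate thresholds on the validation set and keep the one maximizing Youden's J (sensitivity + specificity - 1):

```python
def sens_spec(y_true, y_prob, thr):
    # Confusion-matrix counts at a single probability threshold.
    tp = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p >= thr)
    fn = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p < thr)
    tn = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p < thr)
    fp = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p >= thr)
    return tp / (tp + fn), tn / (tn + fp)


def best_threshold(y_true, y_prob, candidates=None):
    # Maximize Youden's J = sensitivity + specificity - 1.
    if candidates is None:
        candidates = [t / 100 for t in range(1, 100)]  # 0.01 .. 0.99
    return max(candidates,
               key=lambda t: sum(sens_spec(y_true, y_prob, t)) - 1)


# Toy usage: well-separated scores pick a mid-range threshold.
y = [0, 0, 0, 1, 1, 1]
p = [0.05, 0.10, 0.20, 0.60, 0.70, 0.90]
thr = best_threshold(y, p)
```

The same threshold chosen on validation data would then be applied unchanged to the test set, consistent with the matching validation/test thresholds in several of the reports above.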

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0001.model
AUROC/AUPRC on Validation Data
Loss: 0.6267427653074265
AUROC: 0.8415231716484518
AUPRC: 0.7213811095427984
Sensitivity: 0.7736185383244206
Specificity: 0.7531734837799718
Threshold: 0.11
Accuracy:  0.7589691763516928

AUROC/AUPRC on Test Data
Loss: 0.5806676308314006
AUROC: 0.8370650414731955
AUPRC: 0.6786147058603456
Sensitivity: 0.7841291190316073
Specificity: 0.7393274505127594
Threshold: 0.11
Accuracy:  0.7510563380281691

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0002.model
AUROC/AUPRC on Validation Data
Loss: 0.6137485504150391
AUROC: 0.8425250597738775
AUPRC: 0.7254314839171702
Sensitivity: 0.7522281639928698
Specificity: 0.7764456981664316
Threshold: 0.12
Accuracy:  0.7695805962607377

AUROC/AUPRC on Test Data
Loss: 0.5680771364106072
AUROC: 0.8382475612234244
AUPRC: 0.6801487397376591
Sensitivity: 0.7726967047747142
Specificity: 0.760076317672311
Threshold: 0.12
Accuracy:  0.7633802816901408

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0003.model
AUROC/AUPRC on Validation Data
Loss: 0.6193438172340393
AUROC: 0.8422422180822579
AUPRC: 0.726538094215593
Sensitivity: 0.7950089126559715
Specificity: 0.7334273624823695
Threshold: 0.11
Accuracy:  0.7508842849924204

AUROC/AUPRC on Test Data
Loss: 0.5717183364762201
AUROC: 0.8372682494649952
AUPRC: 0.6798278256164236
Sensitivity: 0.7505043712172159
Specificity: 0.7779632721202003
Threshold: 0.12
Accuracy:  0.770774647887324

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0004.model
AUROC/AUPRC on Validation Data
Loss: 0.6195756830275059
AUROC: 0.8426708803793347
AUPRC: 0.7272945658516003
Sensitivity: 0.750445632798574
Specificity: 0.7827926657263752
Threshold: 0.11
Accuracy:  0.7736230419403739

AUROC/AUPRC on Test Data
Loss: 0.5766209456655714
AUROC: 0.837670816204867
AUPRC: 0.6798836973043121
Sensitivity: 0.7700067249495629
Specificity: 0.7638922012878607
Threshold: 0.11
Accuracy:  0.7654929577464789

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0005.model
AUROC/AUPRC on Validation Data
Loss: 0.5840028487145901
AUROC: 0.8428518990619711
AUPRC: 0.7253769002847423
Sensitivity: 0.7540106951871658
Specificity: 0.7820874471086037
Threshold: 0.14
Accuracy:  0.7741283476503285

AUROC/AUPRC on Test Data
Loss: 0.5420037849081887
AUROC: 0.8379439521243897
AUPRC: 0.6797683775305233
Sensitivity: 0.7706792199058508
Specificity: 0.7622227522060577
Threshold: 0.14
Accuracy:  0.7644366197183099

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0006.model
AUROC/AUPRC on Validation Data
Loss: 0.6081661507487297
AUROC: 0.8421102252928355
AUPRC: 0.7251232690460966
Sensitivity: 0.7843137254901961
Specificity: 0.7397743300423131
Threshold: 0.11
Accuracy:  0.752400202122284

AUROC/AUPRC on Test Data
Loss: 0.5639757123258379
AUROC: 0.8368109111945791
AUPRC: 0.6792121595575631
Sensitivity: 0.7545393409549428
Specificity: 0.7782017648461722
Threshold: 0.12
Accuracy:  0.7720070422535211

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0007.model
AUROC/AUPRC on Validation Data
Loss: 0.5641562640666962
AUROC: 0.8424075233375824
AUPRC: 0.7233682740195618
Sensitivity: 0.7754010695187166
Specificity: 0.7433004231311706
Threshold: 0.15
Accuracy:  0.752400202122284

AUROC/AUPRC on Test Data
Loss: 0.5234807180033789
AUROC: 0.8372740233305871
AUPRC: 0.6792912571734313
Sensitivity: 0.7579018157363819
Specificity: 0.7734319103267351
Threshold: 0.16
Accuracy:  0.7693661971830986

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0008.model
AUROC/AUPRC on Validation Data
Loss: 0.6027109436690807
AUROC: 0.8411850186926931
AUPRC: 0.722755024676271
Sensitivity: 0.7629233511586453
Specificity: 0.7750352609308886
Threshold: 0.12
Accuracy:  0.7716018191005558

AUROC/AUPRC on Test Data
Loss: 0.5524588561720318
AUROC: 0.8360093863808304
AUPRC: 0.6785706176070595
Sensitivity: 0.7807666442501682
Specificity: 0.7417123777724779
Threshold: 0.12
Accuracy:  0.7519366197183098

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0009.model
AUROC/AUPRC on Validation Data
Loss: 0.566789859905839
AUROC: 0.8414200915652836
AUPRC: 0.7226730724851282
Sensitivity: 0.7789661319073083
Specificity: 0.7411847672778561
Threshold: 0.14
Accuracy:  0.7518948964123294

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.63it/s]
Loss: 0.5223493983348211
AUROC: 0.8360392180197214
AUPRC: 0.6780056010676594
Sensitivity: 0.7599193006052455
Specificity: 0.7672310994514667
Threshold: 0.15
Accuracy:  0.7653169014084507

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0010.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.5379810910671949
AUROC: 0.8412755280340114
AUPRC: 0.7202734173643552
Sensitivity: 0.7575757575757576
Specificity: 0.7785613540197461
Threshold: 0.19
Accuracy:  0.7726124305204649

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.60it/s]
Loss: 0.5008054342534807
AUROC: 0.8356906208846171
AUPRC: 0.6768975200025196
Sensitivity: 0.7760591795561533
Specificity: 0.7486286668256619
Threshold: 0.19
Accuracy:  0.7558098591549296

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0011.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.5644967351108789
AUROC: 0.8401869017898222
AUPRC: 0.7199684940318948
Sensitivity: 0.7450980392156863
Specificity: 0.7884344146685472
Threshold: 0.15
Accuracy:  0.7761495704901465

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.61it/s]
Loss: 0.5231000188324186
AUROC: 0.834385326297985
AUPRC: 0.6766095010563881
Sensitivity: 0.7639542703429725
Specificity: 0.7581683758645361
Threshold: 0.15
Accuracy:  0.7596830985915493

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0012.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
Loss: 0.5637774839997292
AUROC: 0.8392415820027203
AUPRC: 0.7193651699914765
Sensitivity: 0.7807486631016043
Specificity: 0.7397743300423131
Threshold: 0.14
Accuracy:  0.751389590702375

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.62it/s]
Loss: 0.5258344935046302
AUROC: 0.8335791503147318
AUPRC: 0.6757793409639506
Sensitivity: 0.7572293207800942
Specificity: 0.7615072740281421
Threshold: 0.15
Accuracy:  0.7603873239436619

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0013.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.5536871701478958
AUROC: 0.8400813075582844
AUPRC: 0.7219048731334055
Sensitivity: 0.7468805704099821
Specificity: 0.7898448519040903
Threshold: 0.16
Accuracy:  0.7776654876200101

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.63it/s]
Loss: 0.5140576442082723
AUROC: 0.8341978360514073
AUPRC: 0.6760530336005921
Sensitivity: 0.7639542703429725
Specificity: 0.7543524922489864
Threshold: 0.16
Accuracy:  0.7568661971830986

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0014.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
Loss: 0.5629174541682005
AUROC: 0.840048623629475
AUPRC: 0.7237288174804735
Sensitivity: 0.7754010695187166
Specificity: 0.7383638928067701
Threshold: 0.13
Accuracy:  0.7488630621526023

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.61it/s]
Loss: 0.5206856459379197
AUROC: 0.8341872506311556
AUPRC: 0.6761682575411294
Sensitivity: 0.7673167451244116
Specificity: 0.7502981159074649
Threshold: 0.14
Accuracy:  0.7547535211267605

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0015.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.5261003170162439
AUROC: 0.8410781673869702
AUPRC: 0.7247664996130707
Sensitivity: 0.7664884135472371
Specificity: 0.7595204513399154
Threshold: 0.19
Accuracy:  0.7614957049014653

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.63it/s]
Loss: 0.49257934358384875
AUROC: 0.835125984945287
AUPRC: 0.6764238290972006
Sensitivity: 0.7552118359112306
Specificity: 0.7662771285475793
Threshold: 0.2
Accuracy:  0.7633802816901408

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0016.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.65it/s]
Loss: 0.5647752769291401
AUROC: 0.8402308993862962
AUPRC: 0.7247139726007192
Sensitivity: 0.7718360071301248
Specificity: 0.7475317348377997
Threshold: 0.13
Accuracy:  0.7544214249621021

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.58it/s]
Loss: 0.522202217578888
AUROC: 0.8344666415717361
AUPRC: 0.6759293765506459
Sensitivity: 0.7612642905178211
Specificity: 0.7555449558788457
Threshold: 0.14
Accuracy:  0.7570422535211268

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0017.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
Loss: 0.5546378064900637
AUROC: 0.8392252400383157
AUPRC: 0.7226870899832073
Sensitivity: 0.7629233511586453
Specificity: 0.7665726375176305
Threshold: 0.15
Accuracy:  0.7655381505811015

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.64it/s]
Loss: 0.5157867093880971
AUROC: 0.8334385727260873
AUPRC: 0.6747118754133847
Sensitivity: 0.7478143913920645
Specificity: 0.768423563081326
Threshold: 0.16
Accuracy:  0.7630281690140845

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0018.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
Loss: 0.5453200582414865
AUROC: 0.8404232317365978
AUPRC: 0.7249102794078968
Sensitivity: 0.7736185383244206
Specificity: 0.7468265162200282
Threshold: 0.13
Accuracy:  0.7544214249621021

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.61it/s]
Loss: 0.5115612520111932
AUROC: 0.8343007231285497
AUPRC: 0.6754417987587273
Sensitivity: 0.7760591795561533
Specificity: 0.7471977104698306
Threshold: 0.14
Accuracy:  0.7547535211267605

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0019.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.55101328343153
AUROC: 0.8408154388823101
AUPRC: 0.7267650887082792
Sensitivity: 0.768270944741533
Specificity: 0.7496473906911142
Threshold: 0.13
Accuracy:  0.7549267306720566

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.62it/s]
Loss: 0.513835506969028
AUROC: 0.8343477961716386
AUPRC: 0.6743141204050472
Sensitivity: 0.7700067249495629
Specificity: 0.7545909849749582
Threshold: 0.14
Accuracy:  0.7586267605633803

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0020.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.65it/s]
Loss: 0.5722823701798916
AUROC: 0.8415294570193766
AUPRC: 0.7263456046865526
Sensitivity: 0.7700534759358288
Specificity: 0.7588152327221439
Threshold: 0.12
Accuracy:  0.7620010106114199

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.61it/s]
Loss: 0.5251613424883949
AUROC: 0.8356448309227712
AUPRC: 0.6752392697503189
Sensitivity: 0.7666442501681238
Specificity: 0.7595993322203672
Threshold: 0.13
Accuracy:  0.761443661971831


Plot AUROC/AUPRC for Each Intermediate Model
  Epoch with best Validation Loss:      15, 0.5284
  Epoch with best model Test AUROC:      2, 0.8382
  Epoch with best model Test Accuracy:   6, 0.772

AUROC/AUPRC Plots - Best Model Based on Validation Loss
  Epoch with best Validation Loss:   15, 0.5284
  Best Model Based on Validation Loss:
    ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0015.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.59it/s]
Loss: 0.49257934358384875
AUROC: 0.835125984945287
AUPRC: 0.6764238290972006
Sensitivity: 0.7552118359112306
Specificity: 0.7662771285475793
Threshold: 0.2
Accuracy:  0.7633802816901408
best_model_val_test_auroc: 0.835125984945287
best_model_val_test_auprc: 0.6764238290972006

AUROC/AUPRC Plots - Best Model Based on Model AUROC
  Epoch with best model Test AUROC:   2, 0.8382
  Best Model Based on Model AUROC:
    ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_3a4d4f71_0002.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.60it/s]
Loss: 0.5680771364106072
AUROC: 0.8382475612234244
AUPRC: 0.6801487397376591
Sensitivity: 0.7726967047747142
Specificity: 0.760076317672311
Threshold: 0.12
Accuracy:  0.7633802816901408
best_model_auroc_test_auroc: 0.8382475612234244
best_model_auroc_test_auprc: 0.6801487397376591
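The stats blocks above report AUROC, AUPRC, and sensitivity/specificity at a per-evaluation threshold. The log does not show how that threshold is chosen; the sketch below is one plausible reconstruction in which the threshold maximises Youden's J statistic (sensitivity + specificity − 1), a common choice. The function name `evaluate` and the threshold rule are assumptions, not the project's confirmed implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def evaluate(y_true, y_prob):
    """Compute the metrics printed in the log above.

    ASSUMPTION: the operating threshold is picked by maximising Youden's J
    on the same data; the actual project code may use a different rule.
    """
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    auroc = roc_auc_score(y_true, y_prob)
    auprc = average_precision_score(y_true, y_prob)
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    best = np.argmax(tpr - fpr)                      # Youden's J statistic
    threshold = thresholds[best]
    y_pred = (y_prob >= threshold).astype(int)
    tp = int(((y_pred == 1) & (y_true == 1)).sum())
    tn = int(((y_pred == 0) & (y_true == 0)).sum())
    fp = int(((y_pred == 1) & (y_true == 0)).sum())
    fn = int(((y_pred == 0) & (y_true == 1)).sum())
    return {
        "auroc": auroc,
        "auprc": auprc,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "threshold": float(threshold),
        "accuracy": (tp + tn) / len(y_true),
    }
```

On a perfectly separable toy input this returns sensitivity, specificity, and AUROC of 1.0, which makes the rule easy to sanity-check before applying it to model outputs.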

Total Processing Time: 1651.9060 sec
Experiment Setup
  name:              EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1
  prediction_window: 003
  max_cases:         _ALL
  use_abp:           False
  use_eeg:           True
  use_ecg:           True
  n_residuals:       12
  skip_connection:   False
  batch_size:        128
  learning_rate:     0.0001
  weight_decay:      0.1
  max_epochs:        80
  patience:          5
  device:            mps
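The experiment name in the setup printout appears to be derived from the configuration fields (enabled signals, residual count, batch size, learning rate, weight decay, prediction window, case limit, run hash). A minimal sketch of such a container is below; the `ExperimentSetup` class and its `name()` format string are inferred from the log, not taken from the project code.

```python
from dataclasses import dataclass

@dataclass
class ExperimentSetup:
    """Hypothetical container for the settings printed above.

    ASSUMPTION: the name format is reverse-engineered from the logged
    experiment names and may differ from the actual project code.
    """
    use_abp: bool
    use_eeg: bool
    use_ecg: bool
    n_residuals: int
    batch_size: int
    learning_rate: float
    weight_decay: float
    prediction_window: str   # zero-padded minutes, e.g. "003"
    max_cases: str           # e.g. "_ALL" or a numeric cap
    run_id: str              # short hash distinguishing runs

    def name(self) -> str:
        # Only the enabled signals appear in the name, in ABP/EEG/ECG order.
        signals = "_".join(
            s for s, used in
            [("ABP", self.use_abp), ("EEG", self.use_eeg), ("ECG", self.use_ecg)]
            if used
        )
        return (f"{signals}_{self.n_residuals}_RESIDUAL_BLOCKS_"
                f"{self.batch_size}_BATCH_SIZE_"
                f"{self.learning_rate:.0e}_LEARNING_RATE_"
                f"{self.weight_decay:.0e}_WEIGHT_DECAY_"
                f"{self.prediction_window}_MINS_"
                f"{self.max_cases}_MAX_CASES_{self.run_id}")
```

With the values from this run (`use_abp=False`, `use_eeg=True`, `use_ecg=True`, and so on), `name()` reproduces the logged experiment name, including the `1e-04`/`1e-01` scientific notation from Python's `:.0e` format.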

Model Architecture
HypotensionCNN(
  (ecgResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (ecgFc): Linear(in_features=2814, out_features=32, bias=True)
  (eegResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
  )
  (eegFc): Linear(in_features=720, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=64, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
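The repr above fully specifies each `ResidualBlock`'s submodules: a pre-activation stack (`bn1` → `relu` → `dropout` → `conv1` → `bn2` → `conv2`), a projection shortcut (`residualConv`), and an optional `MaxPool1d` downsample on alternating blocks. The sketch below reconstructs a block with that layout; the forward-pass ordering is an assumption inferred from the submodule names, since the repr does not show `forward()`.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Pre-activation 1-D residual block matching the printed module layout.

    ASSUMPTION: the order of operations in forward() is inferred from the
    submodule names in the repr; the project's actual forward() may differ.
    """

    def __init__(self, in_ch, out_ch, kernel_size=7, downsample=False, p_drop=0.5):
        super().__init__()
        pad = kernel_size // 2  # "same" padding for odd kernels
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=p_drop)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        # Projection shortcut so the skip path matches out_ch channels.
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(kernel_size=2, stride=2) if downsample else None

    def forward(self, x):
        out = self.dropout(self.relu(self.bn1(x)))
        out = self.conv1(out)
        out = self.relu(self.bn2(out))
        out = self.conv2(out)
        out = out + self.residualConv(x)  # add projection shortcut
        if self.downsample is not None:
            out = self.downsample(out)    # halve the temporal length
        return out
```

Stacking twelve such blocks per signal branch, with channel widths growing 1 → 2 → 4 → 6 and pooling on alternating blocks, reproduces the shapes shown in the `ecgResiduals`/`eegResiduals` sequences, after which each branch is flattened into its 32-unit fully connected layer.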

Training Loop
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.98it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
[2024-05-07 00:59:41.811304] Completed epoch 0 with training loss 0.60534513, validation loss 0.60338658
Validation loss improved to 0.60338658. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  2.00it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
[2024-05-07 01:00:34.083009] Completed epoch 1 with training loss 0.60149825, validation loss 0.60133243
Validation loss improved to 0.60133243. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.98it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-07 01:01:26.836576] Completed epoch 2 with training loss 0.60102445, validation loss 0.59851533
Validation loss improved to 0.59851533. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.98it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
[2024-05-07 01:02:19.569057] Completed epoch 3 with training loss 0.59785384, validation loss 0.60239792
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.98it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-07 01:03:12.281285] Completed epoch 4 with training loss 0.59849405, validation loss 0.60128987
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.98it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.65it/s]
[2024-05-07 01:04:04.741913] Completed epoch 5 with training loss 0.59796858, validation loss 0.59743512
Validation loss improved to 0.59743512. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:47<00:00,  1.96it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.65it/s]
[2024-05-07 01:04:57.918325] Completed epoch 6 with training loss 0.59848070, validation loss 0.60133791
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.99it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.66it/s]
[2024-05-07 01:05:50.245753] Completed epoch 7 with training loss 0.59629542, validation loss 0.59909576
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  2.00it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
[2024-05-07 01:06:42.559371] Completed epoch 8 with training loss 0.59764367, validation loss 0.60046417
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.99it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
[2024-05-07 01:07:34.944109] Completed epoch 9 with training loss 0.59877217, validation loss 0.59756887
No improvement in validation loss. 4 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:46<00:00,  1.99it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.65it/s]
[2024-05-07 01:08:27.331991] Completed epoch 10 with training loss 0.59682804, validation loss 0.60117596
No improvement in validation loss. 5 epochs without improvement.
Early stopping due to no improvement in validation loss.
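The early-stopping behaviour in the log above (save on improvement, count epochs without improvement, stop at the patience limit of 5) can be sketched as follows. The function name and its return signature are illustrative, not the notebook's actual code:

```python
def train_with_early_stopping(val_losses, patience=5):
    """val_losses: per-epoch validation losses, in order.
    Returns (best_epoch, best_loss, stopped_epoch)."""
    best_loss = float("inf")
    best_epoch = None
    epochs_without_improvement = 0
    epoch = 0
    for epoch, val_loss in enumerate(val_losses, start=1):
        if val_loss < best_loss:
            # corresponds to "Validation loss improved ... Model saved."
            best_loss = val_loss
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            # corresponds to "No improvement in validation loss. N epochs ..."
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                # "Early stopping due to no improvement in validation loss."
                break
    return best_epoch, best_loss, epoch
```

Fed the validation losses from the run above, this stops after epoch 10, five epochs after the best loss at epoch 5.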

Plot Training and Validation Loss Values
  Epoch with best Validation Loss:    5, 0.5974
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model:
  ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0000.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.89it/s]
Loss: 0.602432232350111
AUROC: 0.5283935346160518
AUPRC: 0.2998615193706461
Sensitivity: 0.5294117647058824
Specificity: 0.498589562764457
Threshold: 0.34
Accuracy:  0.5073269327943406

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.57it/s]
Loss: 0.5913195398118761
AUROC: 0.5061379398943799
AUPRC: 0.2746925544878628
Sensitivity: 0.5326160053799597
Specificity: 0.46315287383734793
Threshold: 0.34
Accuracy:  0.4813380281690141

Intermediate Model:
  ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0001.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.600011769682169
AUROC: 0.5295047881955706
AUPRC: 0.30073677255865217
Sensitivity: 0.49732620320855614
Specificity: 0.5380818053596615
Threshold: 0.34
Accuracy:  0.5265285497726124

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.61it/s]
Loss: 0.588995666636361
AUROC: 0.5092935980180244
AUPRC: 0.27742076870870214
Sensitivity: 0.484196368527236
Specificity: 0.5177677080849034
Threshold: 0.34
Accuracy:  0.5089788732394366

Intermediate Model:
  ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0002.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
Loss: 0.598862774670124
AUROC: 0.5326198180259409
AUPRC: 0.30264747906298506
Sensitivity: 0.5222816399286988
Specificity: 0.5190409026798307
Threshold: 0.33
Accuracy:  0.5199595755432036

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.51it/s]
Loss: 0.5866238276163737
AUROC: 0.5119944840337379
AUPRC: 0.28001582802020253
Sensitivity: 0.5232010759919301
Specificity: 0.488194610064393
Threshold: 0.33
Accuracy:  0.49735915492957744

Intermediate Model:
  ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0003.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
Loss: 0.6004278436303139
AUROC: 0.5350019736064705
AUPRC: 0.3048151389884632
Sensitivity: 0.5329768270944741
Specificity: 0.5035260930888575
Threshold: 0.34
Accuracy:  0.5118746841839312

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.52it/s]
Loss: 0.5892277446058062
AUROC: 0.5137541497654127
AUPRC: 0.2811195788684959
Sensitivity: 0.5400134498991258
Specificity: 0.468161221082757
Threshold: 0.34
Accuracy:  0.4869718309859155

Intermediate Model:
  ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0004.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
Loss: 0.5982490740716457
AUROC: 0.5375789756856711
AUPRC: 0.30676613891155996
Sensitivity: 0.5436720142602496
Specificity: 0.501410437235543
Threshold: 0.33
Accuracy:  0.5133906013137949

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.49it/s]
Loss: 0.5861389636993408
AUROC: 0.5152335424381527
AUPRC: 0.2820608909616521
Sensitivity: 0.5480833893745797
Specificity: 0.46434533746720724
Threshold: 0.33
Accuracy:  0.48626760563380284

Intermediate Model:
  ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0005.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.44it/s]
Loss: 0.5986712984740734
AUROC: 0.5405525846702317
AUPRC: 0.309468131672177
Sensitivity: 0.5614973262032086
Specificity: 0.4936530324400564
Threshold: 0.33
Accuracy:  0.5128852956038403

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.52it/s]
Loss: 0.5859550409846835
AUROC: 0.5177826239043488
AUPRC: 0.283438532622054
Sensitivity: 0.5655682582380632
Specificity: 0.4509897448127832
Threshold: 0.33
Accuracy:  0.48098591549295777

Intermediate Model:
  ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0006.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
Loss: 0.6024825684726238
AUROC: 0.5420221043924687
AUPRC: 0.31062730814139505
Sensitivity: 0.5490196078431373
Specificity: 0.5148095909732017
Threshold: 0.34
Accuracy:  0.5245073269327943

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.50it/s]
Loss: 0.5882081303331587
AUROC: 0.5190592095481774
AUPRC: 0.2839002457569345
Sensitivity: 0.5427034297242771
Specificity: 0.4757929883138564
Threshold: 0.34
Accuracy:  0.4933098591549296

Intermediate Model:
  ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0007.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
Loss: 0.5985492020845413
AUROC: 0.544905204035711
AUPRC: 0.3129451798466102
Sensitivity: 0.5953654188948306
Specificity: 0.47249647390691113
Threshold: 0.33
Accuracy:  0.5073269327943406

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.52it/s]
Loss: 0.5858184681998359
AUROC: 0.5220054046589642
AUPRC: 0.28503911576504437
Sensitivity: 0.4485541358439812
Specificity: 0.5816837586453614
Threshold: 0.34
Accuracy:  0.546830985915493

Intermediate Model:
  ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0008.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.5991595201194286
AUROC: 0.546888867099603
AUPRC: 0.3144229902259123
Sensitivity: 0.483065953654189
Specificity: 0.57475317348378
Threshold: 0.34
Accuracy:  0.5487620010106115

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.50it/s]
Loss: 0.5861309455500708
AUROC: 0.5240599384987084
AUPRC: 0.285874862954351
Sensitivity: 0.4754539340954943
Specificity: 0.5583114715001193
Threshold: 0.34
Accuracy:  0.5366197183098591

Intermediate Model:
  ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0009.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
Loss: 0.598231378942728
AUROC: 0.5507845399988435
AUPRC: 0.31773024404525485
Sensitivity: 0.5828877005347594
Specificity: 0.49647390691114246
Threshold: 0.33
Accuracy:  0.5209701869631127

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.49it/s]
Loss: 0.5846303125222524
AUROC: 0.528046552112104
AUPRC: 0.2878757809370066
Sensitivity: 0.589778076664425
Specificity: 0.4476508466491772
Threshold: 0.33
Accuracy:  0.4848591549295775

Intermediate Model:
  ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0010.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.6023309081792831
AUROC: 0.5553250919549766
AUPRC: 0.32138043652701453
Sensitivity: 0.6024955436720143
Specificity: 0.4788434414668547
Threshold: 0.34
Accuracy:  0.5138959070237493

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.52it/s]
Loss: 0.5876242339611053
AUROC: 0.532407745255767
AUPRC: 0.28997290662097497
Sensitivity: 0.6119704102219233
Specificity: 0.4333412830908657
Threshold: 0.34
Accuracy:  0.4801056338028169


Plot AUROC/AUPRC for Each Intermediate Model
  Epoch with best Validation Loss:       5, 0.5974
  Epoch with best model Test AUROC:     10, 0.5324
  Epoch with best model Test Accuracy:   7, 0.5468

AUROC/AUPRC Plots - Best Model Based on Validation Loss
  Epoch with best Validation Loss:    5, 0.5974
  Best Model Based on Validation Loss:
    ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0005.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.52it/s]
Loss: 0.5859550409846835
AUROC: 0.5177826239043488
AUPRC: 0.283438532622054
Sensitivity: 0.5655682582380632
Specificity: 0.4509897448127832
Threshold: 0.33
Accuracy:  0.48098591549295777
best_model_val_test_auroc: 0.5177826239043488
best_model_val_test_auprc: 0.283438532622054

AUROC/AUPRC Plots - Best Model Based on Model AUROC
  Epoch with best model Test AUROC:  10, 0.5324
  Best Model Based on Model AUROC:
    ./vitaldb_cache/models/EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_4c864fa1_0010.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:17<00:00,  2.50it/s]
Loss: 0.5876242339611053
AUROC: 0.532407745255767
AUPRC: 0.28997290662097497
Sensitivity: 0.6119704102219233
Specificity: 0.4333412830908657
Threshold: 0.34
Accuracy:  0.4801056338028169
best_model_auroc_test_auroc: 0.532407745255767
best_model_auroc_test_auprc: 0.28997290662097497

Total Processing Time: 892.0080 sec
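The per-model statistics reported above (AUROC, AUPRC, sensitivity, specificity, accuracy at a fixed threshold) can be reproduced from labels and predicted scores with standard scikit-learn metrics plus a confusion-matrix tally. This is a sketch of the evaluation, not the notebook's exact code, and the helper name is hypothetical:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def classification_stats(y_true, y_score, threshold):
    """Threshold the scores and report the same statistics as the log."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "AUROC": roc_auc_score(y_true, y_score),          # threshold-free
        "AUPRC": average_precision_score(y_true, y_score),  # threshold-free
        "Sensitivity": tp / (tp + fn),   # true-positive rate at threshold
        "Specificity": tn / (tn + fp),   # true-negative rate at threshold
        "Accuracy": (tp + tn) / len(y_true),
    }
```

Note that AUROC and AUPRC are computed from the raw scores, while sensitivity, specificity, and accuracy depend on the reported threshold (0.33 or 0.34 in the runs above).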
Experiment Setup
  name:              ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e
  prediction_window: 003
  max_cases:         _ALL
  use_abp:           True
  use_eeg:           True
  use_ecg:           True
  n_residuals:       12
  skip_connection:   False
  batch_size:        128
  learning_rate:     0.0001
  weight_decay:      0.1
  max_epochs:        80
  patience:          5
  device:            mps
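The Model Architecture printout below is built from repeated `ResidualBlock` units (BN → ReLU → Dropout → Conv → BN → ReLU → Dropout → Conv, summed with a convolutional projection of the input, with an optional MaxPool1d downsample). A minimal PyTorch sketch consistent with the printed module names follows; the pre-activation forward ordering and constructor signature are inferred from the printout and may differ from the notebook's implementation:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """1-D pre-activation residual block matching the module names
    (bn1, relu, dropout, conv1, bn2, conv2, residualConv, downsample)
    in the architecture printout."""

    def __init__(self, in_ch, out_ch, kernel_size, downsample=False):
        super().__init__()
        pad = kernel_size // 2  # "same" padding, e.g. 7 for kernel 15
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=0.5)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(kernel_size=2, stride=2) if downsample else None

    def forward(self, x):
        out = self.conv1(self.dropout(self.relu(self.bn1(x))))
        out = self.conv2(self.dropout(self.relu(self.bn2(out))))
        out = out + self.residualConv(x)  # skip path projected to out_ch
        if self.downsample is not None:
            out = self.downsample(out)  # halve the temporal length
        return out
```

For example, the first ABP block (`Conv1d(1, 2, kernel_size=(15,))` with downsampling) maps an input of shape `(batch, 1, L)` to `(batch, 2, L // 2)`.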

Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (abpFc): Linear(in_features=2814, out_features=32, bias=True)
  (ecgResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (ecgFc): Linear(in_features=2814, out_features=32, bias=True)
  (eegResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
  )
  (eegFc): Linear(in_features=720, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=96, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
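The `ResidualBlock` entries in the printout above suggest the following minimal sketch. This is a reconstruction for illustration only: the module printout gives the layer registry (`bn1` → `relu` → `dropout` → `conv1` → `bn2` → `conv2`, a `residualConv` shortcut, and an optional `MaxPool1d` downsample), but the exact `forward()` ordering is an assumption, not the project's actual implementation.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Sketch of a pre-activation 1D residual block matching the printed modules."""
    def __init__(self, in_ch, out_ch, kernel_size=3, downsample=False, p_drop=0.5):
        super().__init__()
        pad = kernel_size // 2  # "same" padding for odd kernel sizes
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=p_drop)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(kernel_size=2, stride=2) if downsample else None

    def forward(self, x):
        out = self.conv1(self.dropout(self.relu(self.bn1(x))))
        out = self.conv2(self.relu(self.bn2(out)))
        out = out + self.residualConv(x)  # residual shortcut
        if self.downsample is not None:
            out = self.downsample(out)  # halve the temporal length
        return out

x = torch.randn(8, 4, 64)                     # (batch, channels, length)
y = ResidualBlock(4, 6, downsample=True)(x)
print(tuple(y.shape))                         # → (8, 6, 32)
```

Note how the channel count grows (2 → 4 → 6 in the printout) while every downsampling block halves the sequence length, which is what feeds the 720-feature `eegFc` layer its flattened input.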

Training Loop
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.59it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:07<00:00,  2.28it/s]
[2024-05-07 01:16:31.524703] Completed epoch 0 with training loss 0.52639836, validation loss 0.58074826
Validation loss improved to 0.58074826. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
[2024-05-07 01:17:35.869605] Completed epoch 1 with training loss 0.44737223, validation loss 0.58147931
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.41it/s]
[2024-05-07 01:18:40.242678] Completed epoch 2 with training loss 0.44076508, validation loss 0.73376560
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.44it/s]
[2024-05-07 01:19:44.378210] Completed epoch 3 with training loss 0.43887845, validation loss 0.67033601
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
[2024-05-07 01:20:48.618673] Completed epoch 4 with training loss 0.43838859, validation loss 0.55007267
Validation loss improved to 0.55007267. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.61it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
[2024-05-07 01:21:52.543529] Completed epoch 5 with training loss 0.43939537, validation loss 0.56091070
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
[2024-05-07 01:22:56.518504] Completed epoch 6 with training loss 0.43851143, validation loss 0.55610096
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
[2024-05-07 01:24:00.664017] Completed epoch 7 with training loss 0.43940657, validation loss 0.53840137
Validation loss improved to 0.53840137. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
[2024-05-07 01:25:04.932289] Completed epoch 8 with training loss 0.43743914, validation loss 0.57761520
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.61it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
[2024-05-07 01:26:08.788445] Completed epoch 9 with training loss 0.43525386, validation loss 0.54842108
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
[2024-05-07 01:27:12.904760] Completed epoch 10 with training loss 0.43928981, validation loss 0.57662153
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
[2024-05-07 01:28:16.935445] Completed epoch 11 with training loss 0.43535575, validation loss 0.55775928
No improvement in validation loss. 4 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.44it/s]
[2024-05-07 01:29:21.148126] Completed epoch 12 with training loss 0.43659198, validation loss 0.61480558
No improvement in validation loss. 5 epochs without improvement.
Early stopping due to no improvement in validation loss.
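The early-stopping behavior in the log above (save the model on each validation-loss improvement, stop after `PATIENCE` epochs without one) can be sketched as follows; `train_with_early_stopping` is an illustrative helper, not the notebook's actual training loop:

```python
def train_with_early_stopping(val_losses, patience=5):
    """Scan per-epoch validation losses in order; return (best_epoch, best_loss)."""
    best_epoch, best_loss, stale = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss, stale = epoch, loss, 0  # model would be saved here
        else:
            stale += 1
            if stale >= patience:
                break  # early stop: no improvement for `patience` epochs
    return best_epoch, best_loss

# Validation losses from the log above (epochs 0-12, rounded to 4 places).
losses = [0.5807, 0.5815, 0.7338, 0.6703, 0.5501, 0.5609,
          0.5561, 0.5384, 0.5776, 0.5484, 0.5766, 0.5578, 0.6148]
print(train_with_early_stopping(losses))  # → (7, 0.5384)
```

Run on the logged losses, this reproduces the reported outcome: best model at epoch 7, training halted after the fifth consecutive epoch without improvement (epoch 12).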

Plot Validation and Loss Values from Training
  Epoch with best Validation Loss:    7, 0.5384
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0000.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
Loss: 0.5799797773361206
AUROC: 0.8367588604873928
AUPRC: 0.7101833633432707
Sensitivity: 0.7754010695187166
Specificity: 0.7489421720733427
Threshold: 0.15
Accuracy:  0.7564426478019202

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.41it/s]
Loss: 0.5338995605707169
AUROC: 0.8330409779260307
AUPRC: 0.668422514844096
Sensitivity: 0.7457969065232011
Specificity: 0.7784402575721441
Threshold: 0.16
Accuracy:  0.7698943661971831

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0001.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.42it/s]
Loss: 0.5807769726961851
AUROC: 0.8406708753510381
AUPRC: 0.7224867829532884
Sensitivity: 0.7807486631016043
Specificity: 0.7447108603667136
Threshold: 0.14
Accuracy:  0.7549267306720566

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.43it/s]
Loss: 0.5390844096740087
AUROC: 0.8367637579589128
AUPRC: 0.6770983652207649
Sensitivity: 0.7505043712172159
Specificity: 0.7805866921058908
Threshold: 0.15
Accuracy:  0.7727112676056338

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0002.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.44it/s]
Loss: 0.7320067323744297
AUROC: 0.8405891655290145
AUPRC: 0.7281953231353923
Sensitivity: 0.7843137254901961
Specificity: 0.736953455571227
Threshold: 0.06
Accuracy:  0.7503789792824659

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.39it/s]
Loss: 0.6705888913737403
AUROC: 0.8349149982734538
AUPRC: 0.6773400950517435
Sensitivity: 0.7256220578345662
Specificity: 0.7977581683758646
Threshold: 0.07
Accuracy:  0.7788732394366197

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0003.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.41it/s]
Loss: 0.667802881449461
AUROC: 0.8422258761178532
AUPRC: 0.7289273390818912
Sensitivity: 0.7664884135472371
Specificity: 0.7588152327221439
Threshold: 0.08
Accuracy:  0.7609903991915109

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.44it/s]
Loss: 0.6164307538006041
AUROC: 0.8369737823198141
AUPRC: 0.6790404042584653
Sensitivity: 0.7841291190316073
Specificity: 0.7409968995945624
Threshold: 0.08
Accuracy:  0.7522887323943662

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0004.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
Loss: 0.549085970968008
AUROC: 0.84291475277122
AUPRC: 0.7268665286450822
Sensitivity: 0.7825311942959001
Specificity: 0.736953455571227
Threshold: 0.16
Accuracy:  0.7498736735725113

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.42it/s]
Loss: 0.512769259346856
AUROC: 0.8378713778416038
AUPRC: 0.6791776969865339
Sensitivity: 0.7652992602555481
Specificity: 0.7672310994514667
Threshold: 0.17
Accuracy:  0.766725352112676

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0005.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.44it/s]
Loss: 0.5627186503261328
AUROC: 0.842918523993775
AUPRC: 0.7274001864624879
Sensitivity: 0.7664884135472371
Specificity: 0.7637517630465445
Threshold: 0.15
Accuracy:  0.7645275391611925

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.43it/s]
Loss: 0.5251607143216663
AUROC: 0.8378076247423611
AUPRC: 0.6794173596857823
Sensitivity: 0.7834566240753195
Specificity: 0.7419508704984498
Threshold: 0.15
Accuracy:  0.7528169014084507

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0006.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
Loss: 0.5523661710321903
AUROC: 0.8427764746108727
AUPRC: 0.7262454776705706
Sensitivity: 0.7593582887700535
Specificity: 0.7750352609308886
Threshold: 0.15
Accuracy:  0.7705912076806468

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.40it/s]
Loss: 0.5168530699279573
AUROC: 0.837453333934243
AUPRC: 0.6790547455496074
Sensitivity: 0.7740416946872899
Specificity: 0.7536370140710709
Threshold: 0.15
Accuracy:  0.7589788732394366

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0007.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.42it/s]
Loss: 0.5362123604863882
AUROC: 0.8428556702845261
AUPRC: 0.727041237561008
Sensitivity: 0.7736185383244206
Specificity: 0.7475317348377997
Threshold: 0.17
Accuracy:  0.7549267306720566

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.41it/s]
Loss: 0.5017197387086021
AUROC: 0.8373587066925998
AUPRC: 0.6787206611951957
Sensitivity: 0.7632817753866846
Specificity: 0.7691390412592416
Threshold: 0.18
Accuracy:  0.7676056338028169

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0008.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5814458392560482
AUROC: 0.8417444167050074
AUPRC: 0.7268721980708615
Sensitivity: 0.7450980392156863
Specificity: 0.7912552891396333
Threshold: 0.13
Accuracy:  0.7781707933299646

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.43it/s]
Loss: 0.5402260005474091
AUROC: 0.8360201321862374
AUPRC: 0.6782396017359572
Sensitivity: 0.7612642905178211
Specificity: 0.7648461721917481
Threshold: 0.13
Accuracy:  0.7639084507042253

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0009.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5469531565904617
AUROC: 0.841662706882984
AUPRC: 0.7252989995149405
Sensitivity: 0.7593582887700535
Specificity: 0.7729196050775741
Threshold: 0.16
Accuracy:  0.7690752905507833

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.43it/s]
Loss: 0.5124780840343899
AUROC: 0.8360372933978574
AUPRC: 0.6781292195936635
Sensitivity: 0.7807666442501682
Specificity: 0.7426663486763654
Threshold: 0.16
Accuracy:  0.7526408450704225

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0010.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.43it/s]
Loss: 0.5755668040364981
AUROC: 0.84107816738697
AUPRC: 0.726691040489243
Sensitivity: 0.7629233511586453
Specificity: 0.7679830747531735
Threshold: 0.13
Accuracy:  0.7665487620010106

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.44it/s]
Loss: 0.5327484531535043
AUROC: 0.8352259850896337
AUPRC: 0.6770406580763337
Sensitivity: 0.7404169468728985
Specificity: 0.7839255902694968
Threshold: 0.14
Accuracy:  0.7725352112676056

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0011.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.43it/s]
Loss: 0.5602828487753868
AUROC: 0.840540768172893
AUPRC: 0.7238260426202713
Sensitivity: 0.7450980392156863
Specificity: 0.7877291960507757
Threshold: 0.15
Accuracy:  0.775644264780192

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.43it/s]
Loss: 0.5211072756184472
AUROC: 0.8346915015595051
AUPRC: 0.676782961167984
Sensitivity: 0.7639542703429725
Specificity: 0.7562604340567612
Threshold: 0.15
Accuracy:  0.7582746478873239

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0012.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.44it/s]
Loss: 0.6010407041758299
AUROC: 0.8402623262409209
AUPRC: 0.7266724028563193
Sensitivity: 0.7771836007130125
Specificity: 0.7390691114245416
Threshold: 0.1
Accuracy:  0.7498736735725113

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.43it/s]
Loss: 0.5601348476277457
AUROC: 0.8340921422340465
AUPRC: 0.6760359799695332
Sensitivity: 0.7484868863483524
Specificity: 0.7710469830670165
Threshold: 0.11
Accuracy:  0.7651408450704226


Plot AUROC/AUPRC for Each Intermediate Model
  Epoch with best Validation Loss:       7, 0.5384
  Epoch with best model Test AUROC:      4, 0.8379
  Epoch with best model Test Accuracy:   2, 0.7789

AUROC/AUPRC Plots - Best Model Based on Validation Loss
  Epoch with best Validation Loss:    7, 0.5384
  Best Model Based on Validation Loss:
    ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0007.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.43it/s]
Loss: 0.5017197387086021
AUROC: 0.8373587066925998
AUPRC: 0.6787206611951957
Sensitivity: 0.7632817753866846
Specificity: 0.7691390412592416
Threshold: 0.18
Accuracy:  0.7676056338028169
best_model_val_test_auroc: 0.8373587066925998
best_model_val_test_auprc: 0.6787206611951957

AUROC/AUPRC Plots - Best Model Based on Model AUROC
  Epoch with best model Test AUROC:   4, 0.8379
  Best Model Based on Model AUROC:
    ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_49fd5c1e_0004.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:18<00:00,  2.40it/s]
Loss: 0.512769259346856
AUROC: 0.8378713778416038
AUPRC: 0.6791776969865339
Sensitivity: 0.7652992602555481
Specificity: 0.7672310994514667
Threshold: 0.17
Accuracy:  0.766725352112676
best_model_auroc_test_auroc: 0.8378713778416038
best_model_auroc_test_auprc: 0.6791776969865339

Total Processing Time: 1213.4780 sec

Hyperparameter search¶

Batch size¶

Holding all other parameters fixed, sweep the batch sizes from 16 to 256:

In [104]:
ENABLE_EXPERIMENT = False
DISPLAY_MODEL_PREDICTION=True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY=True

# Flat list of batch sizes; the nested list form would pass a whole
# list as batch_size in a single iteration.
batch_sizes = [16, 32, 64, 128, 256]

if ENABLE_EXPERIMENT:
    for batch_size in batch_sizes:
        (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
            experimentNamePrefix=None, 
            useAbp=True, 
            useEeg=False, 
            useEcg=False,
            nResiduals=12, 
            skip_connection=False,
            batch_size=batch_size,
            learning_rate=1e-4,
            weight_decay=0.0,
            pos_weight=None,
            max_epochs=MAX_EPOCHS,
            patience=PATIENCE,
            device=device
        )

        if DISPLAY_MODEL_PREDICTION:
            for case_id_to_check in my_cases_of_interest_idx:
                preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                printModelPrediction(case_id_to_check, preds, experimentName)

                if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                    break

Learning Rate¶

Holding all other parameters fixed, sweep the learning rate from 1e-4 to 1e-2:

In [105]:
ENABLE_EXPERIMENT = False
DISPLAY_MODEL_PREDICTION=True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY=True

learning_rates = [
    1e-4, 1e-3, 1e-2
]

if ENABLE_EXPERIMENT:
    for learning_rate in learning_rates:
        (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
            experimentNamePrefix=None, 
            useAbp=True, 
            useEeg=False, 
            useEcg=False,
            nResiduals=12, 
            skip_connection=False,
            batch_size=128,
            learning_rate=learning_rate,
            weight_decay=0.0,
            pos_weight=None,
            max_epochs=MAX_EPOCHS,
            patience=PATIENCE,
            device=device
        )
    
        if DISPLAY_MODEL_PREDICTION:
            for case_id_to_check in my_cases_of_interest_idx:
                preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                printModelPrediction(case_id_to_check, preds, experimentName)

                if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                    break

Weight decay¶

Holding all other parameters fixed, sweep the weight decay from 1e-3 to 1e0:

In [106]:
ENABLE_EXPERIMENT = False
DISPLAY_MODEL_PREDICTION=True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY=True

weight_decays = [
    1e-3, 1e-2, 1e-1, 1e0
]

if ENABLE_EXPERIMENT:
    for weight_decay in weight_decays:
        (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
            experimentNamePrefix=None, 
            useAbp=True, 
            useEeg=False, 
            useEcg=False,
            nResiduals=12, 
            skip_connection=False,
            batch_size=128,
            learning_rate=1e-4,
            weight_decay=weight_decay,
            pos_weight=None,
            max_epochs=MAX_EPOCHS,
            patience=PATIENCE,
            device=device
        )
    
        if DISPLAY_MODEL_PREDICTION:
            for case_id_to_check in my_cases_of_interest_idx:
                preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                printModelPrediction(case_id_to_check, preds, experimentName)

                if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                    break

Label balance¶

Holding all other parameters fixed, sweep the pos_weight in BCEWithLogitsLoss from 2 to 4:

In [107]:
ENABLE_EXPERIMENT = False
DISPLAY_MODEL_PREDICTION=True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY=True

pos_weights = [
    2.0, 4.0
]

if ENABLE_EXPERIMENT:
    for pos_weight in pos_weights:
        (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
            experimentNamePrefix=None, 
            useAbp=True, 
            useEeg=False, 
            useEcg=False,
            nResiduals=12, 
            skip_connection=False,
            batch_size=128,
            learning_rate=1e-4,
            weight_decay=0.0,
            pos_weight=pos_weight,
            max_epochs=MAX_EPOCHS,
            patience=PATIENCE,
            device=device
        )
    
        if DISPLAY_MODEL_PREDICTION:
            for case_id_to_check in my_cases_of_interest_idx:
                preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                printModelPrediction(case_id_to_check, preds, experimentName)

                if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                    break
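For context on the `pos_weight` sweep above, here is a minimal sketch of how `pos_weight` in `BCEWithLogitsLoss` upweights the positive (hypotension) class to counter label imbalance; the logits and labels are illustrative values, not the project's data:

```python
import torch
import torch.nn as nn

logits = torch.tensor([2.0, -1.0, 0.5])   # raw model outputs (pre-sigmoid)
labels = torch.tensor([1.0, 0.0, 1.0])    # 1 = hypotension event

plain = nn.BCEWithLogitsLoss()(logits, labels)
# pos_weight=4 multiplies each positive-label term by 4 before averaging.
weighted = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(4.0))(logits, labels)

print(plain.item(), weighted.item())  # weighted > plain: positives cost more
```

With `pos_weight > 1`, misclassified positives contribute proportionally more to the loss, pushing the model toward higher sensitivity at the expense of specificity.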

Ablations¶

Holding all other parameters fixed, perform ablations on the following parameters:

  • Number of residual blocks: reduce from 12 to 6, then to 1
  • Skip connection: enable, keeping 12 residual blocks
In [108]:
ENABLE_EXPERIMENT = False
DISPLAY_MODEL_PREDICTION=True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY=True

ablations = [
    # nResiduals, skip_connection
    [6, False],
    [1, False],
    [12, True]
]

if ENABLE_EXPERIMENT:
    for (nResiduals, skip_connection) in ablations:
        (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
            experimentNamePrefix=None, 
            useAbp=True, 
            useEeg=False, 
            useEcg=False,
            nResiduals=nResiduals, 
            skip_connection=skip_connection,
            batch_size=128,
            learning_rate=1e-4,
            weight_decay=0.0,
            pos_weight=None,
            max_epochs=MAX_EPOCHS,
            patience=PATIENCE,
            device=device
        )
    
        if DISPLAY_MODEL_PREDICTION:
            for case_id_to_check in my_cases_of_interest_idx:
                preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                printModelPrediction(case_id_to_check, preds, experimentName)

                if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                    break

Evaluation¶

Metric description¶

As in the original paper, model performance will be evaluated on the following metrics:

  • AUROC: Area Under the Receiver Operating Characteristic curve. This measures the model's ability to distinguish between positive and negative classes. The curve plots the true positive rate (sensitivity) against the false positive rate (1 - specificity) at various threshold settings, and the area under it is calculated. Higher values indicate better performance.
  • AUPRC: Area Under the Precision-Recall Curve. This measures the model's ability to balance precision and recall. The curve plots precision against recall at various threshold settings, and the area under it is calculated. Higher values indicate better performance, and AUPRC is more informative than AUROC when labels are imbalanced.
  • Sensitivity: The proportion of actual positive cases correctly identified by the model (the true positive rate). Higher values indicate better performance.
  • Specificity: The proportion of actual negative cases correctly identified by the model (the true negative rate). Higher values indicate better performance.
  • Threshold: Not strictly an evaluation metric; reported as the probability cutoff that minimizes the absolute difference between sensitivity and specificity.
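A minimal sketch of how these metrics can be computed with scikit-learn, including the sensitivity/specificity-balancing threshold; the `evaluate` helper below is illustrative and is not the notebook's `eval_model`:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve

def evaluate(y_true, y_prob):
    """Compute AUROC, AUPRC, and the threshold balancing sensitivity/specificity."""
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    # sensitivity = tpr, specificity = 1 - fpr; pick the cutoff where they meet.
    idx = np.argmin(np.abs(tpr - (1 - fpr)))
    return {
        "auroc": roc_auc_score(y_true, y_prob),
        "auprc": average_precision_score(y_true, y_prob),
        "sensitivity": tpr[idx],
        "specificity": 1 - fpr[idx],
        "threshold": thresholds[idx],
    }

# Toy labels/probabilities for illustration only.
y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.2, 0.3])
print(evaluate(y_true, y_prob))
```

`average_precision_score` is one common estimator of AUPRC; trapezoidal integration of the precision-recall curve is another, and the two can differ slightly.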

Model evaluation¶

Calculate performance metrics on pre-trained models:

In [109]:
ENABLE_VALIDATION = True

validate_models = [
    # prediction window, model path
    # 3-minute models
    [3, os.path.join('pretrained', 'abp_3min_f386500f.model')],
    [3, os.path.join('pretrained', 'ecg_3min_9888ba74.model')],
    [3, os.path.join('pretrained', 'eeg_3min_6e41ecbf.model')],
    [3, os.path.join('pretrained', 'abp_ecg_3min_4c033450.model')],
    [3, os.path.join('pretrained', 'abp_eeg_3min_a25c1edf.model')],
    [3, os.path.join('pretrained', 'eeg_ecg_3min_24df69ca.model')],
    [3, os.path.join('pretrained', 'abp_eeg_ecg_3min_bea05a31.model')],
    # 5-minute models
    [5, os.path.join('pretrained', 'abp_5min_f4919819.model')],
    [5, os.path.join('pretrained', 'ecg_5min_f5345149.model')],
    [5, os.path.join('pretrained', 'eeg_5min_8970a5eb.model')],
    [5, os.path.join('pretrained', 'abp_ecg_5min_6306c305.model')],
    [5, os.path.join('pretrained', 'abp_eeg_5min_482fd843.model')],
    [5, os.path.join('pretrained', 'eeg_ecg_5min_3885bb9f.model')],
    [5, os.path.join('pretrained', 'abp_eeg_ecg_5min_5ab3f8eb.model')],
    # 10-minute models
    [10, os.path.join('pretrained', 'abp_10min_7661baf5.model')],
    [10, os.path.join('pretrained', 'ecg_10min_49dc88bd.model')],
    [10, os.path.join('pretrained', 'eeg_10min_90d4cdb5.model')],
    [10, os.path.join('pretrained', 'abp_ecg_10min_009ed9f2.model')],
    [10, os.path.join('pretrained', 'abp_eeg_10min_ff7c129d.model')],
    [10, os.path.join('pretrained', 'eeg_ecg_10min_e34ef1f5.model')],
    [10, os.path.join('pretrained', 'abp_eeg_ecg_10min_198d1d84.model')],
    # 15-minute models
    [15, os.path.join('pretrained', 'abp_15min_61321b51.model')],
    [15, os.path.join('pretrained', 'ecg_15min_3ac4acf1.model')],
    [15, os.path.join('pretrained', 'eeg_15min_acd313eb.model')],
    [15, os.path.join('pretrained', 'abp_ecg_15min_ad0d8b9b.model')],
    [15, os.path.join('pretrained', 'abp_eeg_15min_4c527f9b.model')],
    [15, os.path.join('pretrained', 'eeg_ecg_15min_2bb1d44d.model')],
    [15, os.path.join('pretrained', 'abp_eeg_ecg_15min_10e6e48b.model')],
]

if ENABLE_VALIDATION:
    for pred_window, model_path in validate_models:
        if pred_window == PREDICTION_WINDOW:
            print()
            print(f"Prediction Window: {pred_window}, Model: {model_path}")
            test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64)
            loss_func = nn.BCELoss()
            model = torch.load(model_path)
            eval_model(model, device, test_loader, loss_func, print_detailed = False)
Prediction Window: 3, Model: pretrained/abp_3min_f386500f.model
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 89/89 [00:18<00:00,  4.75it/s]
Loss: 0.42744589036100367
AUROC: 0.8438336157983227
AUPRC: 0.6797624284221427
Sensitivity: 0.7612642905178211
Specificity: 0.7662771285475793
Threshold: 0.24
Accuracy:  0.7649647887323944

Prediction Window: 3, Model: pretrained/ecg_3min_9888ba74.model
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 89/89 [00:18<00:00,  4.90it/s]
Loss: 0.5888826719830546
AUROC: 0.5267733345565374
AUPRC: 0.2885596977859124
Sensitivity: 0.7249495628782784
Specificity: 0.3026472692582876
Threshold: 0.34
Accuracy:  0.41320422535211265

Prediction Window: 3, Model: pretrained/eeg_3min_6e41ecbf.model
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 89/89 [00:17<00:00,  5.07it/s]
Loss: 0.5595826807986485
AUROC: 0.6204938387240655
AUPRC: 0.34348768827369497
Sensitivity: 0.6018829858776059
Specificity: 0.5840686859050799
Threshold: 0.28
Accuracy:  0.5887323943661972

Prediction Window: 3, Model: pretrained/abp_ecg_3min_4c033450.model
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 89/89 [00:19<00:00,  4.60it/s]
Loss: 0.4282352924346924
AUROC: 0.8441931191239891
AUPRC: 0.6803942916054424
Sensitivity: 0.7686617350369872
Specificity: 0.7543524922489864
Threshold: 0.23
Accuracy:  0.7580985915492958

Prediction Window: 3, Model: pretrained/abp_eeg_3min_a25c1edf.model
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 89/89 [00:18<00:00,  4.75it/s]
Loss: 0.42441082737419045
AUROC: 0.8412669721576183
AUPRC: 0.6793720377608912
Sensitivity: 0.7612642905178211
Specificity: 0.7696160267111853
Threshold: 0.26
Accuracy:  0.7674295774647887

Prediction Window: 3, Model: pretrained/eeg_ecg_3min_24df69ca.model
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 89/89 [00:18<00:00,  4.83it/s]
Loss: 0.5797199489695303
AUROC: 0.5818320347214615
AUPRC: 0.31564060940003297
Sensitivity: 0.5837256220578345
Specificity: 0.5306463152873837
Threshold: 0.34
Accuracy:  0.5445422535211267

Prediction Window: 3, Model: pretrained/abp_eeg_ecg_3min_bea05a31.model
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 89/89 [00:19<00:00,  4.52it/s]
Loss: 0.4218234568834305
AUROC: 0.8413591936219315
AUPRC: 0.6769803378024059
Sensitivity: 0.7706792199058508
Specificity: 0.7586453613164799
Threshold: 0.28
Accuracy:  0.7617957746478873
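Because the pretrained checkpoints were produced on three different machines (mps and cuda devices, per Table 4), loading them on another machine may require `map_location`. A sketch of a more portable loader; `load_for_device` is our own name, not from the notebook:

```python
import torch

def load_for_device(model_path, device):
    """Load a whole pickled model onto the available device (hypothetical
    helper; a plain torch.load assumes the checkpoint's original device is
    present). weights_only=False is required on newer PyTorch because the
    checkpoints store full modules, not just state dicts."""
    model = torch.load(model_path, map_location=torch.device(device),
                       weights_only=False)
    model.to(device)
    model.eval()  # disable dropout / batch-norm updates for evaluation
    return model
```

`map_location` remaps all stored tensors to the requested device before the module is reconstructed, so a cuda-trained checkpoint can be evaluated on cpu or mps.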

Model prediction¶

Use the model to predict the chance of an IOH event for real cases.

In [110]:
PERFORM_PREDICTIONS = True
PERFORM_PREDICTION_FIRST_ONLY = True

# NOTE: This is always set so that if earlier checks were enabled, the earlier data will be reused.
my_cases_of_interest_idx = [84, 198, 60, 16, 27]

if PERFORM_PREDICTION_FIRST_ONLY:
    my_cases_of_interest_idx = [84]

if PERFORM_PREDICTIONS:
    positiveSegmentsMap, negativeSegmentsMap, iohEventsMap, cleanEventsMap = \
        extract_segments(my_cases_of_interest_idx, debug=False,
                         checkCache=False, forceWrite=False, returnSegments=True,
                         skipInvalidCleanEvents=SKIP_INVALID_CLEAN_EVENTS,
                         skipInvalidIohEvents=SKIP_INVALID_IOH_EVENTS)

    for pred_window, model_path in validate_models:
        if pred_window == PREDICTION_WINDOW:
            for case_id_to_check in my_cases_of_interest_idx:
                print()
                print(f'Model Predictions - Case {case_id_to_check} for {pred_window} Minute Prediction Window')
                print(f'Model: {model_path}')
                printAbpOverlay(case_id_to_check, positiveSegmentsMap, 
                            negativeSegmentsMap, iohEventsMap, cleanEventsMap, movingAverage=False)

                ready_model = torch.load(model_path)
                preds = predictionsForModel(case_id_to_check, None, None, device, ready_model=ready_model)

                printModelPrediction(case_id_to_check, preds, None)

                if PERFORM_PREDICTION_FIRST_ONLY:
                    break
84: positiveSegments: 4, negativeSegments: 15

Model Predictions - Case 84 for 3 Minute Prediction Window
Model: pretrained/abp_3min_f386500f.model
Case 84
ABP Shape: (8856936, 1)
nanmin: -495.6260070800781
nanmean: 81.66030883789062
nanmax: 221.26779174804688
Model Predictions - Case 84 for 3 Minute Prediction Window
Model: pretrained/ecg_3min_9888ba74.model
Case 84
ABP Shape: (8856936, 1)
nanmin: -495.6260070800781
nanmean: 81.66030883789062
nanmax: 221.26779174804688
Model Predictions - Case 84 for 3 Minute Prediction Window
Model: pretrained/eeg_3min_6e41ecbf.model
Case 84
ABP Shape: (8856936, 1)
nanmin: -495.6260070800781
nanmean: 81.66030883789062
nanmax: 221.26779174804688
Model Predictions - Case 84 for 3 Minute Prediction Window
Model: pretrained/abp_ecg_3min_4c033450.model
Case 84
ABP Shape: (8856936, 1)
nanmin: -495.6260070800781
nanmean: 81.66030883789062
nanmax: 221.26779174804688
Model Predictions - Case 84 for 3 Minute Prediction Window
Model: pretrained/abp_eeg_3min_a25c1edf.model
Case 84
ABP Shape: (8856936, 1)
nanmin: -495.6260070800781
nanmean: 81.66030883789062
nanmax: 221.26779174804688
Model Predictions - Case 84 for 3 Minute Prediction Window
Model: pretrained/eeg_ecg_3min_24df69ca.model
Case 84
ABP Shape: (8856936, 1)
nanmin: -495.6260070800781
nanmean: 81.66030883789062
nanmax: 221.26779174804688
Model Predictions - Case 84 for 3 Minute Prediction Window
Model: pretrained/abp_eeg_ecg_3min_bea05a31.model
Case 84
ABP Shape: (8856936, 1)
nanmin: -495.6260070800781
nanmean: 81.66030883789062
nanmax: 221.26779174804688

Results¶

We were able to run all of the same experiments as the authors of the original paper, though we were not able to fully replicate their results. In addition, we were able to run a few ablation studies to quantify the impact of the number of residual blocks and of the skip connection path in the model.

Our complete table of experimental results is shown below in Table 1:

Waveform AUROC AUPRC Sensitivity Specificity Threshold
Time to event: 3 min
ABP 0.8348665138 0.6827477064 0.7586206897 0.7586916743 0.24
ECG 0.5172308631 0.2777976524 0.6864623244 0.3392040256 0.34
EEG 0.5765847539 0.3108817974 0.5504469987 0.5562671546 0.28
ABP + ECG 0.8350885234 0.6822426973 0.7618135377 0.7463403477 0.23
ABP + EEG 0.8333041215 0.6830001845 0.7573435504 0.7602927722 0.26
ECG + EEG 0.5634548602 0.3061094980 0.5932311622 0.5166971638 0.34
ABP + ECG + EEG 0.8327256552 0.6812673236 0.7541507024 0.7653247941 0.29
Time to event: 5 min
ABP 0.8001353845 0.6089914018 0.7172413793 0.7268053283 0.19
ECG 0.5408307613 0.2789858266 0.9089655172 0.0983874737 0.30
EEG 0.5889406162 0.3209396685 0.5606896552 0.5814442627 0.26
ABP + ECG 0.7980903530 0.6008434537 0.7234482759 0.7120822622 0.17
ABP + EEG 0.7932250526 0.6046273056 0.7020689655 0.7289086235 0.22
ECG + EEG 0.5959877026 0.3278444283 0.6020689655 0.5461556438 0.30
ABP + ECG + EEG 0.7912796254 0.6016009768 0.7151724138 0.7193269455 0.25
Time to event: 10 min
ABP 0.7417550791 0.4515207123 0.6550802139 0.7046703297 0.18
ECG 0.4859654235 0.1971933307 0.8609625668 0.1252289377 0.22
EEG 0.5929758558 0.2583727183 0.5695187166 0.5592948718 0.24
ABP + ECG 0.7434999641 0.4485223920 0.7058823529 0.6572802198 0.17
ABP + EEG 0.7456936446 0.4482909599 0.6773618538 0.6893315018 0.17
ECG + EEG 0.5900167031 0.2531979287 0.5989304813 0.5283882784 0.23
ABP + ECG + EEG 0.7433881478 0.4513591475 0.6951871658 0.6668956044 0.16
Time to event: 15 min
ABP 0.7350525214 0.3629148943 0.6534772182 0.6929577465 0.14
ECG 0.4958383997 0.1685094385 0.3944844125 0.6157276995 0.22
EEG 0.5681875626 0.1976809641 0.5935251799 0.5009389671 0.21
ABP + ECG 0.7377326308 0.3649217753 0.6642685851 0.6786384977 0.15
ABP + EEG 0.7364418324 0.3626843809 0.6678657074 0.6732394366 0.15
ECG + EEG 0.5763017755 0.2000958414 0.5071942446 0.6140845070 0.18
ABP + ECG + EEG 0.7344424460 0.3624089403 0.6906474820 0.6593896714 0.15

Table 1: Area under the Receiver-operating Characteristic Curve, Area under the Precision-Recall Curve, Sensitivity, and Specificity of the model in predicting intraoperative hypotension

For comparison, the results in the original paper are shown below as Table 2: Area under the Receiver-operating Characteristic Curve, Area under the Precision-Recall Curve, Sensitivity, and Specificity of the model in predicting intraoperative hypotension in the original paper.

Hypotheses¶

The original paper investigated the following hypotheses:

  1. Hypothesis 1: A model using ABP and ECG will outperform a model using ABP alone in predicting IOH.
  2. Hypothesis 2: A model using ABP and EEG will outperform a model using ABP alone in predicting IOH.
  3. Hypothesis 3: A model using ABP, EEG, and ECG will outperform a model using ABP alone in predicting IOH.

As seen in Table 1, our results were very noisy, and we were unable to prove or disprove any of the three hypotheses. The results were not consistent across prediction windows or metrics, and the performance was not as high as in the original paper.
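Given this noise, a paired bootstrap confidence interval on the AUROC difference between two models would put such comparisons on firmer statistical footing. This is a sketch assuming scikit-learn and is not part of the original paper's analysis or our pipeline:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc_diff(y_true, prob_a, prob_b, n_boot=1000, seed=0):
    """Bootstrap a 95% CI for the AUROC difference between two models
    evaluated on the same test labels (illustrative helper)."""
    rng = np.random.default_rng(seed)
    y_true, prob_a, prob_b = map(np.asarray, (y_true, prob_a, prob_b))
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue  # a resample must contain both classes for AUROC
        diffs.append(roc_auc_score(y_true[idx], prob_a[idx]) -
                     roc_auc_score(y_true[idx], prob_b[idx]))
    return np.percentile(diffs, [2.5, 97.5])
```

If the interval excludes zero, the two models' AUROCs differ beyond resampling noise on that test set; an interval straddling zero would match the inconclusive pattern we observed.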

Hyperparameters¶

We performed a hyperparameter search across batch size, learning rate, weight decay, and label balancing. The results are shown below in Table 3:

Hyperparameter Search range Optimum value
Batch size 16, 32, 64, 128, 256 128
Learning rate 1e-4, 1e-3, 1e-2 1e-4
Weight decay 1e-3, 1e-2, 1e-1, 1e0 1e-1
Label balance 1.0, 2.0, 4.0 1.0 (disabled)

Table 3: Hyperparameter search ranges and optimum values

The experimental data supporting these results can be found in Supplemental Table 1 - Hyperparameter exploration.

Ablation Study¶

We performed an ablation study of the number of residual blocks and the presence of skip connections. The original model configuration performed better than the ablated models, showing that these features contribute positively to model performance and that their removal yields qualitatively worse results.

The experimental data supporting these results can be found in Supplemental Table 2 - Ablation study.

Computational requirements¶

Training and evaluation were run on three different machines, as shown in Table 4 below:

Machine Processor System RAM GPU GPU RAM Device
Macbook Pro M1 32 GB Integrated Shared with system mps
Macbook Pro M3 Pro 36 GB Integrated Shared with system mps
Desktop PC AMD Ryzen 7 3800X 32 GB RTX 2070 Super 8 GB cuda

Table 4: Specifications of machines used in training and evaluation

Typical runtime on a Macbook Pro is on the order of 2-3 minutes per epoch, and a typical experiment runs for 20-60 epochs. Including post-training evaluations, a typical experiment takes 90-360 minutes.

We estimate that we spent a total of 350 GPU hours training across all experiments.

Discussion¶

Implications of experimental results¶

Reproducibility of original paper¶

Although we were not able to achieve the same performance level as the original paper, we were able to perform all of the same experiments and test the hypotheses. Our results were ultimately neither consistent with the original paper nor consistent from prediction window to prediction window, or from metric to metric, and we were unable to prove or disprove any of the three hypotheses. The hyperparameter search and ablation study results were consistent with the original paper.

Using our best models, we were able to generate predictions for cases, plot the predicted values (probability of an IOH event), and compare them to the mean arterial pressure (MAP) of the raw ABP waveform. The predictions would increase before the MAP began to drop and decrease as the MAP recovered. For sustained periods of MAP below 65 mmHg, the model predictions would become high and remain high for the duration of the IOH event. These results were consistent with the similar prediction plots presented in Figure 3 of the original paper.
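The MAP trace used in these comparisons can be approximated with a moving average over the raw ABP waveform. This is an illustrative sketch; the sampling rate and window length are our assumptions, and a clinical MAP is usually derived per beat:

```python
import numpy as np

def rolling_map(abp, srate=100, window_sec=10):
    """Approximate mean arterial pressure as a moving average of the raw
    ABP waveform (illustrative; srate and window_sec are assumed values)."""
    n = srate * window_sec
    kernel = np.ones(n) / n
    # 'valid' mode returns only windows fully inside the signal
    return np.convolve(abp, kernel, mode='valid')
```

Plotting this smoothed trace against the model's predicted probabilities makes the lead time before each sub-65 mmHg dip visible.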

We believe that the discrepancy in performance between our results and the original paper is mostly due to differences in datasets and data preprocessing. The original paper used a dataset with 39,600 cases, of which 14,140 met the inclusion and exclusion criteria and were used for training. The authors released a much smaller dataset publicly, with only 6,388 cases, of which 2,763 met the inclusion and exclusion criteria. This smaller dataset provided less data for training and validation, which likely impacted the model's performance.

The original paper also used a signal quality index to filter out low-quality data, which we were not able to implement. Without this filter, noise remained in our dataset, which likely impacted the model's performance.

The authors of the original paper also used a different data preprocessing pipeline than we did, and did not precisely document it. This led us to try various data preprocessing methods in an attempt to match the original paper's results, but we were not able to achieve the same performance levels.

Factors affecting reproducibility¶

Low difficulty¶

The most straightforward aspects of this project were the data download and the model implementation. Specific areas where we encountered low difficulty:

  • The data download was straightforward, as the dataset is available through a convenient Python library API and is also published on PhysioNet for download.
  • The model architecture was generally clearly defined in the original paper with an included architecture diagram. The hyperparameters were provided in a supplemental table. These features made it easy to implement the model in PyTorch.

High difficulty¶

The most difficult aspects of this project all involved the data preprocessing stage. This is the most impactful part of the data pipeline, and it was not fully documented in the original paper. Some areas where we encountered difficulty:

  • The source data is unlabelled, so our team was responsible for implementing analysis methods to identify positive (IOH event occurred) and negative (IOH event did not occur) segments by running a lookahead analysis of our input training set. The original paper was not precise on how this was done, so we had to make some assumptions.
  • The volume of raw data is in excess of 90GB. A non-trivial amount of compute was required to minify the input data to only include the data tracks of interest to our experiments (i.e., ABP, ECG, and EEG tracks).
  • We found it difficult to trace back to the definition of the "jSQI" signal quality index referenced in the paper. Multiple references across multiple papers needed to be traversed to understand which variant of the quality index was used.
    • The only available source code related to the signal quality index is that referenced by our paper in [5]. Source code was not directly linked from the paper, but the GitHub repository of the corresponding author of reference [5] led us to MATLAB source code for the signal quality index as described in that paper. The code is available at https://github.com/cliffordlab/PhysioNet-Cardiovascular-Signal-Toolbox/tree/master/Tools/BP_Tools [7]
    • Our team had insufficient time to port this signal quality index to Python for use in our investigation, or to set up a MATLAB environment in which to assess our source data using the above MATLAB functions. This is a potential source of noise in our dataset, as the signal quality index was used to filter out low-quality data in the original paper, and we were unable to do the same.
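The lookahead labeling described above can be sketched as follows. The exact rule is our assumption, as the original paper does not fully document it; the sampling rate, the one-minute event duration, and the function name are all our own choices:

```python
import numpy as np

MAP_THRESHOLD = 65.0  # mmHg, the IOH definition used throughout
SRATE = 100           # Hz; assumed sampling rate of the MAP trace

def label_by_lookahead(map_trace, pred_window_min, event_min=1):
    """Label a training segment by looking ahead in the MAP trace: positive
    if MAP stays below 65 mmHg for `event_min` minutes after the prediction
    window (our assumed rule, not the paper's documented one)."""
    start = pred_window_min * 60 * SRATE
    window = map_trace[start:start + event_min * 60 * SRATE]
    if window.size < event_min * 60 * SRATE:
        return None  # insufficient lookahead data; skip this segment
    return bool(np.all(window < MAP_THRESHOLD))
```

Segments labeled True feed the positive set, False the negative set, and None is discarded, which mirrors the positive/negative segment maps used elsewhere in the notebook.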

Unknowns¶

One aspect of our results that we were unable to explain was why our threshold values were lower than expected. In the original paper, the threshold is chosen to minimize the difference between sensitivity and specificity, and we applied an algorithm to achieve this goal. However, the original paper's thresholds were between 0.30 and 0.62, while ours were between 0.14 and 0.34. We posited that this was due to the label imbalance (4 positive labels : 1 negative label) and performed experiments comparing different pos_weight values using the BCEWithLogitsLoss loss function against the BCELoss loss function from the original paper. However, this did not yield better, or indeed any usable, results.
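The pos_weight comparison we attempted can be sketched with toy tensors. The weight of 4.0 mirrors the 4:1 imbalance, though the appropriate value depends on which class dominates; the tensors here are illustrative, not real model outputs:

```python
import torch
import torch.nn as nn

logits = torch.tensor([0.2, -1.5, 3.0])    # toy raw model outputs
targets = torch.tensor([1.0, 0.0, 1.0])    # toy IOH labels

# Original paper's setup: sigmoid probabilities fed to BCELoss.
plain = nn.BCELoss()(torch.sigmoid(logits), targets)

# Our experiment: raw logits fed to BCEWithLogitsLoss, with pos_weight
# scaling the positive-class term to counter the label imbalance.
weighted = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(4.0))(logits, targets)
```

With pos_weight > 1, each positive sample's loss term is multiplied by that factor, so the weighted loss is strictly larger whenever positives are imperfectly predicted; the intent is to shift the decision boundary, and thus the balanced threshold, upward.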

Suggestions to original authors¶

Our main suggestion to the original authors would be to release their code publicly and to provide more detailed documentation of the data preprocessing pipeline. This would help future researchers reproduce the results more accurately. Specifically, the authors should provide more information on how the signal quality index was calculated and used to filter out low-quality data.

We would also suggest correcting the hyperparameters published in Supplemental Table 1. Specifically, the output size for residual blocks 11 and 12 for the ECG and ABP data sets is given as 496x6. This is a typo, and should read 469x6. The typo became apparent when performing the size-down operation within Residual Block 11 and recognizing that the tensor dimensions were misaligned.

Additionally, more explicit references to the signal quality index assessment tools should be added. Our team could not find a direct reference to the MATLAB source code described in reference [5], and had to manually discover the GitHub profile of the corresponding author's lab in order to find MATLAB source corresponding to the metrics described therein.

Future work¶

In future work, we would like to implement the signal quality index and use it to filter out low-quality data. We would also like to experiment with additional data preprocessing techniques and with pre-filtered datasets such as PulseDB, a cleaned dataset based on MIMIC-III and VitalDB. Further, we would like to experiment with different model architectures and hyperparameters to see whether we can improve the model's performance. Finally, we would like to run the models with different seeds to create a model ensemble, smoothing some of the noise in our results.
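The seed ensemble proposed above could be as simple as averaging probability outputs across independently trained copies of the model; `ensemble_predict` is our own name for this hypothetical helper:

```python
import torch

def ensemble_predict(models, x):
    """Average the probability outputs of several models trained with
    different random seeds, smoothing seed-to-seed noise."""
    with torch.no_grad():
        preds = torch.stack([m(x) for m in models])  # (n_models, batch, 1)
    return preds.mean(dim=0)
```

Because each member already outputs a probability in [0, 1], the mean stays in [0, 1] and can be thresholded with the same balanced-threshold procedure used for single models.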

References¶

  1. Jo Y-Y, Jang J-H, Kwon J-m, Lee H-C, Jung C-W, Byun S, et al. “Predicting intraoperative hypotension using deep learning with waveforms of arterial blood pressure, electroencephalogram, and electrocardiogram: Retrospective study.” PLoS ONE, (2022) 17(8): e0272055 https://doi.org/10.1371/journal.pone.0272055
  2. Hatib, Feras, Zhongping J, Buddi S, Lee C, Settels J, Sibert K, Rhinehart J, Cannesson M “Machine-learning Algorithm to Predict Hypotension Based on High-fidelity Arterial Pressure Waveform Analysis” Anesthesiology (2018) 129:4 https://doi.org/10.1097/ALN.0000000000002300
  3. Bao, X., Kumar, S.S., Shah, N.J. et al. "AcumenTM hypotension prediction index guidance for prevention and treatment of hypotension in noncardiac surgery: a prospective, single-arm, multicenter trial." Perioperative Medicine (2024) 13:13 https://doi.org/10.1186/s13741-024-00369-9
  4. Lee, HC., Park, Y., Yoon, S.B. et al. VitalDB, a high-fidelity multi-parameter vital signs database in surgical patients. Sci Data 9, 279 (2022). https://doi.org/10.1038/s41597-022-01411-5
  5. Li Q., Mark R.G. & Clifford G.D. "Artificial arterial blood pressure artifact models and an evaluation of a robust blood pressure and heart rate estimator." BioMed Eng OnLine. (2009) 8:13. pmid:19586547 https://doi.org/10.1186/1475-925X-8-13
  6. Park H-J, "VitalDB Python Example Notebooks" GitHub Repository https://github.com/vitaldb/examples/blob/master/hypotension_art.ipynb
  7. Vest A, Da Poian G, Li Q, Liu C, Nemati S, Shah A, Clifford GD, "An Open Source Benchmarked Toolbox for Cardiovascular Waveform and Interval Analysis", Physiological measurement 39, no. 10 (2018): 105004. DOI:10.5281/zenodo.1243111; 2018.
In [111]:
time_delta = np.round(timer() - global_time_start, 3)
print(f'Total Notebook Processing Time: {time_delta:.4f} sec')
Total Notebook Processing Time: 11026.6000 sec